
30 December 2023

Valhalla's Things: I've been influenced

Posted on December 30, 2023
Tags: madeof:atoms
[Image: A woman wearing a red sleeveless dress; from the waist up it is fitted, while the skirt flares out. There is a white border with red embroidery and black fringe at the hem and a belt of the same material at the waist.]
By the influencers on the famous proprietary video platform[1]. When I'm crafting with no powertools I tend to watch videos, and this autumn I've seen a few in a row that were making red wool dresses, at least one or two of them medieval kirtles. I don't remember which channels they were, and I've decided not to go back and look for them, at least for a time.
[Image: A woman wearing a red shirt with wide sleeves, a short yoke, a small collar band and 3 buttons in the front.]
Anyway, my brain suddenly decided that I needed a red wool dress, fitted enough to give some bust support. I had already made a dress that satisfied the latter requirement, and I still had more than half of the red wool faille I used for the Garibaldi blouse (still not blogged, but I will get to it), and this time I wanted it to be ready for this winter. While the pattern I was going to use is Victorian, it was designed for underwear, and this dress was designed to be outerwear, so from the very start I decided not to bother too much with any kind of historical details or techniques.
[Image: A few meters of wool-imitation fringe trim rolled up; the fringe is black and is attached to a white band with a line of lozenge outlines in red and brown.]
I knew that I didn't have enough fabric to add a flounce to the hem, as in the cotton dress, but then I remembered that some time ago I fell for a piece of fringed trim in black, white and red. I did a quick check that the red wasn't clashing (it wasn't), and I knew I had a plan for the hem decoration. Then I spent a week finishing other projects, and the more I thought about this dress, the more I was tempted to have spiral lacing at the front rather than buttons, as a nod to the kirtle inspiration. It may end up being a bit of a hassle, but if it is too much I can always add a hidden zipper in a side seam, and only have to undo a bit of the lacing around the neckhole to wear the dress.
Finally, I could start working on the dress: I cut all of the main pieces, and since the seam lines were quite curved I marked them with tailor's tacks, which I don't exactly enjoy doing or removing, but they are the only method that was guaranteed to survive while manipulating this fabric (and not leave traces afterwards).
[Image: A shaped piece of red fabric with the long edges bound in navy blue bias tape and all the seamlines marked with basting thread.]
While cutting the front pieces I accidentally cut the high neckline instead of the one I had used on the cotton dress: I decided to go for it on the back pieces too and decide later whether I wanted to lower it. Since this is a modern dress, with no historical accuracy at all, and I have access to a serger, I decided to use some dark blue cotton voile I've had in my stash for quite some time, cut into bias strips, to bind the raw edges before sewing. This works significantly better than bought bias tape, which is a bit too stiff for this.
[Image: A bigger piece of fabric with tailor's tacks for the seams and darts; at the top edge there is a strip of navy blue fabric sewn to a wide seam allowance, with two rows of cording closest to the center front line.]
For the front opening, I've decided to reinforce the areas where the lacing holes will be with cotton: I've used some other navy blue cotton, also from the stash, and added two lines of cording to stiffen the front edge.
So I've cut the front in two pieces rather than on the fold, sewn the reinforcements to the sewing allowances in such a way that the corded edge was aligned with the center front, and then sewn the bottom of the front seam from just before the end of the reinforcements to the hem.
[Image: The front opening being worked on: on one side there are hand-sewn eyelets in red silk that matches the fabric, on the other side the positions for more eyelets are still marked with pins. There is also still basting to keep the folded allowance in place.]
The allowances are then folded back, and they are kept in place by the worked lacing holes. The cotton was pinked, while for the wool I used the selvedge of the fabric and there was no need for any finishing. Behind the opening I've added a modesty placket: I've cut a strip of red wool and a strip of cotton, folded the edges of the strip of cotton to the center, added cording to the long sides, pressed the allowances of the wool towards the wrong side, and then handstitched the cotton to the wool, wrong sides facing. This was finally handstitched to one side of the sewing allowance of the center front.
I've also decided to add real pockets, rather than just slits, and for some reason I decided to add them by hand after I had sewn the dress, so I've left openings in the side back seams, where the slits were in the cotton dress. I've also already worn the dress, but haven't added the pockets yet, as I'm still debating about their shape. This will be fixed in the near future.
Another thing that will have to be fixed is the trim situation: I like the fringe at the bottom, and I had enough to also make a belt, but this makes the top of the dress a bit empty. I can't use the same fringe tape, as it is too wide, but it would be nice to have something smaller that matches the patterned part. I think I can make something suitable with tablet weaving, but I'm not sure which materials to use, so it will have to be on hold for a while, until I decide on the supplies and have the time to make it.
Another improvement I'd like to add are detached sleeves, both matching (I should still have just enough fabric) and contrasting, but first I want to learn more about real kirtle construction, and maybe start making sleeves that would be suitable also for a real kirtle. Meanwhile, I've worn it on Christmas (over my 1700s menswear shirt with big sleeves) and may wear it again tomorrow (if I bother to dress up to spend New Year's Eve at home :D )

  1. yep, that's YouTube, of course.

29 December 2023

Russ Allbery: Review: The Afterward

Review: The Afterward, by E.K. Johnston
Publisher: Dutton Books
Copyright: February 2019
Printing: 2020
ISBN: 0-7352-3190-7
Format: Kindle
Pages: 339
The Afterward is a standalone young adult high fantasy with a substantial romance component. The title is not misspelled. Sir Erris and her six companions, matching the number of the new gods, were successful in their quest for the godsgem. They defeated the Old God and destroyed Him forever, freeing King Dorrenta from his ensorcellment, and returned in triumph to Cadrium to live happily ever after. Or so the story goes. Sir Erris and three of the companions are knights. Another companion is the best mage in the kingdom. Kalanthe Ironheart, who distracted the Old God at a critical moment and allowed Sir Erris to strike, is only an apprentice due to her age, but surely will become a great knight. And then there is Olsa Rhetsdaughter, the lowborn thief, now somewhat mockingly called Thief of the Realm for all the good that does her. The reward was enough for her to buy her freedom from the Thief's Court. It was not enough to pay for food after that, or enough for her to change her profession, and the Thief's Court no longer has any incentive to give her easy (or survivable) assignments. Kalanthe is in a considerably better position, but she still needs a good marriage. Her reward paid off half of her debt, which broadens her options, but she's still a debt-knight, liable for the full cost of her training once she reaches the age of nineteen. She's mostly made her peace with the decisions she made given her family's modest means, but marriages of that type are usually for heirs, and Kalanthe is not looking forward to bearing a child. Or, for that matter, sleeping with a man. Olsa and Kalanthe fell in love during the Quest. Given Kalanthe's debt and the way it must be paid, and her iron-willed determination to keep vows, neither of them expected their relationship to survive the end of the Quest. Both of them wish that it had. The hook is that this novel picks up after the epic fantasy quest is over and everyone went home. This is not an entirely correct synopsis; chapters of The Afterward alternate between "After" and "Before" (and one chapter delightfully titled "More or less the exact moment of"), and by the end of the book we get much of the story of the Quest. It's not told from the perspective of the lead heroes, though; it's told by following Kalanthe and Olsa, who would be firmly relegated to supporting characters in a typical high fantasy. And it's largely told through the lens of their romance. This is not the best fantasy novel I've read, but I had a fun time with it. I am now curious about the intended audience and marketing, though. It was published by a YA imprint, and both the ages of the main characters and the general theme of late teenagers trying to chart a course in an adult world match that niche. But it's also clearly intended for readers who have read enough epic fantasy quests that they will both be amused by the homage and not care that the story elides a lot of the typical details. Anyone who read David Eddings at an impressionable age will enjoy the way Johnston pokes gentle fun at The Belgariad (this book is dedicated to David and Leigh Eddings), but surely the typical reader of YA fantasy these days isn't also reading Eddings. I'm therefore not quite sure who this book was for, but apparently that group included me. Johnston thankfully is not on board with the less savory parts of Eddings's writing, as you might have guessed from the sapphic romance. 
There is no obnoxious gender essentialism here, although there do appear to be gender roles that I never quite figured out. Knights are referred to as sir, but all of the knights in this story are women. Men still seem to run a lot of things (kingdoms, estates, mage colleges), but apart from the mage, everyone on the Quest was female, and there seems to be an expectation that women go out into the world and have adventures while men stay home. I'm not sure if there was an underlying system that escaped me, or if Johnston just mixed things up for the hell of it. (If the latter, I approve.) This book does suffer a bit from addressing some current-day representation issues without managing to fold them naturally into the story or setting. One of the Quest knights is transgender, something that's revealed in an awkward couple of paragraphs and then never mentioned again. Two of the characters have a painfully earnest conversation about the word "bisexual," complete with a strained attempt at in-universe etymology. Racial diversity (Olsa is black, and Kalanthe is also not white) seemed to be handled a bit better, although I am not the reader to notice if the discussions of hair maintenance were similarly awkward. This is way better than no representation and default-white characters, to be clear, but it felt a bit shoehorned in at times and could have used some more polish. These are quibbles, though. Olsa was the heart of the book for me, and is exactly the sort of character I like to read about. Kalanthe is pure stubborn paladin, but I liked her more and more as the story continued. She provides a good counterbalance to Olsa's natural chaos. I do wish Olsa had more opportunities to show her own competence (she's not a very good thief, she's just the thief that Sir Erris happened to know), but the climax of the story was satisfying. My main grumble is that I badly wanted to dwell on the happily-ever-after for at least another chapter, ideally two. Johnston was done with the story before I was. The writing was serviceable but not great and there are some bits that I don't think would stand up to a strong poke, but the characters carried the story for me. Recommended if you'd like some sapphic romance and lightweight class analysis complicating your Eddings-style quest fantasy. Rating: 7 out of 10

28 December 2023

Russ Allbery: Review: Nettle & Bone

Review: Nettle & Bone, by T. Kingfisher
Publisher: Tor
Copyright: 2022
ISBN: 1-250-24403-X
Format: Kindle
Pages: 242
Nettle & Bone is a standalone fantasy novel with fairy tale vibes. T. Kingfisher is a pen name for Ursula Vernon. As the book opens, Marra is giving herself a blood infection by wiring together dog bones out of a charnel pit. This is the second of three impossible tasks that she was given by the dust-wife. Completing all three will give her the tools to kill a prince. I am a little cautious of which T. Kingfisher books I read since she sometimes writes fantasy and sometimes writes horror and I don't get along with horror. This one seemed a bit horrific in the marketing, so I held off on reading it despite the Hugo nomination. It turns out to be just on the safe side of my horror tolerance, with only a couple of parts that I read a bit quickly. One of those is the opening, which I am happy to report does not set the tone for the rest of the book. Marra starts the story in a wasteland full of disease, madmen, and cannibals (who, in typical Ursula Vernon fashion, turn out to be nicer than the judgmental assholes outside of the blistered land). She doesn't stay there long. By chapter two, the story moves on to flashbacks explaining how Marra ended up there, alternating with further (and less horrific) steps in her quest to kill the prince of the Northern Kingdom. Marra is a princess of a small, relatively poor coastal kingdom with a good harbor and acquisitive neighbors. Her mother, the queen, has protected the kingdom through arranged marriage of her daughters to the prince of the Northern Kingdom, who rules it in all but name given the mental deterioration of his father the king. Marra's eldest sister Damia was first, but she died suddenly and mysteriously in a fall. (If you're thinking about the way women are injured by "accident," you have the right idea.) Kania, the middle sister, is next to marry; she lives, but not without cost. Meanwhile, Marra is sent off to a convent to ensure that there are no complicating potential heirs, and to keep her on hand as a spare. I won't spoil the entire backstory, but you do learn it all. Marra is a typical Kingfisher protagonist: a woman who is way out of her depth who persists with stubbornness, curiosity, and innate decency because what else is there to do? She accumulates the typical group of misfits and oddballs common in Kingfisher's quest fantasies, characters that in the Chosen One male fantasy would be supporting characters at best. The dust-wife is a delight; her chicken is even better. There are fairy godmothers and a goblin market and a tooth extraction that was one of the creepiest things I've read without actually being horror. It is, in short, a Kingfisher fantasy novel, with a touch more horror than average but not enough to push it out of the fantasy genre. I think my favorite part of this book was not the main quest. It was the flashback scenes set in the convent, where Marra has the space (and the mentorship) to develop her sense of self.
"We're a mystery religion," said the abbess, when she'd had a bit more wine than usual, "for people who have too much work to do to bother with mysteries. So we simply get along as best we can. Occasionally someone has a vision, but [the goddess] doesn't seem to want anything much, and so we try to return the favor."
If you have read any other Kingfisher novels, much of this will be familiar: the speculative asides, the dogged determination, the slightly askew nature of the world, the vibes-based world-building that feels more like a fairy tale than a carefully constructed magic system, and the sense that the main characters (and nearly all of the supporting characters) are average people trying to play the hands they were dealt as ethically as they can. You will know that the tentative and woman-initiated romance is coming as soon as the party meets the paladin type who is almost always the romantic interest in one of these books. The emotional tone of the book is a bit predictable for regular readers, but Ursula Vernon's brain is such a delightful place to spend some time that I don't mind.
Marra had not managed to be pale and willowy and consumptive at any point in eighteen years of life and did not think she could achieve it before she died.
Nettle & Bone won the Hugo for Best Novel in 2023. I'm not sure why this specific T. Kingfisher novel won and not any of the half-dozen earlier novels she's written in a similar style, but sure, I have no objections. I'm glad one of them won; they're all worth reading and hopefully that will help more people discover this delightful style of fantasy that doesn't feel like what anyone else is doing. Recommended, although be prepared for a few more horror touches than normal and a rather grim first chapter. Content warnings: domestic abuse. The dog... lives? Is equally as alive at the end of the book as it was at the end of the first chapter? The dog does not die; I'll just leave it at that. (Neither does the chicken.) Rating: 8 out of 10

25 December 2023

Sergio Talens-Oliag: GitLab CI/CD Tips: Automatic Versioning Using semantic-release

This post describes how I'm using semantic-release on gitlab-ci to manage versioning automatically for different kinds of projects following a simple workflow (a develop branch where changes are added or merged to test new versions, a temporary release/#.#.# branch to generate the release candidate versions, and a main branch where the final versions are published).

What is semantic-release
It is a Node.js application designed to manage project versioning information on Git repositories using a continuous integration system (in this post we will use gitlab-ci).

How does it work
By default semantic-release uses semver for versioning (release versions use the format MAJOR.MINOR.PATCH) and commit messages are parsed to determine the next version number to publish. If after analyzing the commits the version number has to be changed, the command updates the files we tell it to (i.e. the package.json file for nodejs projects and possibly a CHANGELOG.md file), creates a new commit with the changed files, creates a tag with the new version and pushes the changes to the repository. When running on a CI/CD system we usually generate the artifacts related to a release (a package, a container image, etc.) from the tag, as it includes the right version number and usually has passed all the required tests (it is a good idea to run the tests again in any case, as someone could create a tag manually, or we could run extra jobs when building the final assets; if they fail it is not a big issue anyway, numbers are cheap and infinite, so we can skip releases if needed).
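Not part of the original pipeline, but a quick way to see what the tool would do on a repository is to run it locally in dry-run mode; a minimal sketch, assuming Node.js is available and the repository already follows the commit conventions described in the next section:
# Run semantic-release without creating commits, tags or releases;
# it only prints the next version and the generated release notes
npx semantic-release --dry-run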

Commit messages and versioning
The commit messages must follow a known format; the default module used to analyze them uses the angular git commit guidelines, but I prefer the conventional commits one, mainly because it's a lot easier to use when you want to update the MAJOR version. The commit message format used must be:
<type>(optional scope): <description>
[optional body]
[optional footer(s)]
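As an illustration (these messages are made up, not taken from the post), this is the kind of bump each commit type triggers under the rules described below:
# PATCH release (e.g. 1.3.2 -> 1.3.3)
fix(api): return 404 instead of 500 when a user is missing

# MINOR release (e.g. 1.3.2 -> 1.4.0)
feat(cli): add a --json output option

# MAJOR release (e.g. 1.3.2 -> 2.0.0), exclamation mark variant
feat(cli)!: drop support for the legacy configuration format

# MAJOR release (e.g. 1.3.2 -> 2.0.0), footer variant
feat(cli): drop support for the legacy configuration format

BREAKING CHANGE: the old configuration file is no longer read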
The system supports three types of branches: release, maintenance and pre-release, but for now I'm not using maintenance ones. The branches I use and their types are:
  • main as release branch (final versions are published from there)
  • develop as pre release branch (used to publish development and testing versions with the format #.#.#-SNAPSHOT.#)
  • release/#.#.# as pre release branches (they are created from develop to publish release candidate versions with the format #.#.#-rc.# and once they are merged with main they are deleted)
On the release branch (main) the version number is updated as follows:
  1. The MAJOR number is incremented if a commit with a BREAKING CHANGE: footer or an exclamation (!) after the type/scope is found in the list of commits found since the last version change (it looks for tags on the same branch).
  2. The MINOR number is incremented if the MAJOR number is not going to be changed and there is a commit with type feat in the commits found since the last version change.
  3. The PATCH number is incremented if neither the MAJOR nor the MINOR numbers are going to be changed and there is a commit with type fix in the commits found since the last version change.
On the pre release branches (develop and release/#.#.#) the version and pre release numbers are always calculated from the last published version available on the branch (i.e. if we published version 1.3.2 on main we need to have the commit with that tag on the develop or release/#.#.# branch to compute the next version correctly). The version number is updated as follows:
  1. The MAJOR number is incremented if a commit with a BREAKING CHANGE: footer or an exclamation (!) after the type/scope is found in the list of commits found since the last released version. In our example it was 1.3.2 and the version is updated to 2.0.0-SNAPSHOT.1 or 2.0.0-rc.1 depending on the branch.
  2. The MINOR number is incremented if the MAJOR number is not going to be changed and there is a commit with type feat in the commits found since the last released version. In our example the release was 1.3.2 and the version is updated to 1.4.0-SNAPSHOT.1 or 1.4.0-rc.1 depending on the branch.
  3. The PATCH number is incremented if neither the MAJOR nor the MINOR numbers are going to be changed and there is a commit with type fix in the commits found since the last version change. In our example the release was 1.3.2 and the version is updated to 1.3.3-SNAPSHOT.1 or 1.3.3-rc.1 depending on the branch.
  4. The pre release number is incremented if the MAJOR, MINOR and PATCH numbers are not going to be changed but there is a commit that would otherwise update the version (i.e. a fix on 1.3.3-SNAPSHOT.1 will set the version to 1.3.3-SNAPSHOT.2, a fix or feat on 1.4.0-rc.1 will set the version to 1.4.0-rc.2 and so on).
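To make the rules above more concrete, here is a made-up commit history on develop after the hypothetical 1.3.2 release (newest commit first) and the version each pipeline run would publish; hashes and messages are invented for the illustration:
f3c2a1d fix(api): handle empty responses     -> 1.4.0-SNAPSHOT.2 (pre release bump)
9b8e7c6 feat(cli): add a --verbose flag      -> 1.4.0-SNAPSHOT.1 (MINOR bump)
1a2b3c4 docs: fix a typo in the README       -> no new version
0d9e8f7 (tag: v1.3.2) ci(release): v1.3.2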

How do we manage its configuration
Although the system is designed to work with nodejs projects, it can be used with multiple programming languages and project types. For nodejs projects the usual place to put the configuration is the project's package.json, but I prefer to use the .releaserc file instead. As I use a common set of CI templates, instead of using a .releaserc on each project I generate it on the fly on the jobs that need it, replacing values related to the project type and the current branch on a template using the tmpl command (lately I use a branch of my own fork while I wait for some feedback from upstream, as you will see on the Dockerfile).

Container used to run it
As we run the command on a gitlab-ci job we use the image built from the following Dockerfile:
Dockerfile
# Semantic release image
FROM golang:alpine AS tmpl-builder
#RUN go install github.com/krakozaure/tmpl@v0.4.0
RUN go install github.com/sto/tmpl@v0.4.0-sto.2
FROM node:lts-alpine
COPY --from=tmpl-builder /go/bin/tmpl /usr/local/bin/tmpl
RUN apk update &&\
  apk upgrade &&\
  apk add curl git jq openssh-keygen yq zip &&\
  npm install --location=global\
    conventional-changelog-conventionalcommits@6.1.0\
    @qiwi/multi-semantic-release@7.0.0\
    semantic-release@21.0.7\
    @semantic-release/changelog@6.0.3\
    semantic-release-export-data@1.0.1\
    @semantic-release/git@10.0.1\
    @semantic-release/gitlab@9.5.1\
    @semantic-release/release-notes-generator@11.0.4\
    semantic-release-replace-plugin@1.2.7\
    semver@7.5.4\
  &&\
  rm -rf /var/cache/apk/*
CMD ["/bin/sh"]
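The post does not show how the image is published; a minimal sketch of building it and pushing it to a container registry so it can later be referenced through the SEMANTIC_RELEASE_IMAGE variable (the registry path and tag are placeholders):
# Build the image from the Dockerfile above and push it to the registry
docker build -t registry.gitlab.com/my-group/ci-images/semantic-release:latest .
docker login registry.gitlab.com
docker push registry.gitlab.com/my-group/ci-images/semantic-release:latest
# Then set SEMANTIC_RELEASE_IMAGE to that URI on the CI/CD variables of the
# projects (or groups) that use the job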

How and when is it executed
The job that runs semantic-release is executed when new commits are added to the develop, release/#.#.# or main branches (basically when something is merged or pushed) and after all tests have passed (we don't want to create a new version that does not compile or does not pass at least the unit tests). The job is something like the following:
semantic_release:
  image: $SEMANTIC_RELEASE_IMAGE
  rules:
    - if: '$CI_COMMIT_BRANCH =~ /^(develop|main|release\/\d+.\d+.\d+)$/'
      when: always
  stage: release
  before_script:
    - echo "Loading scripts.sh"
    - . $ASSETS_DIR/scripts.sh
  script:
    - sr_gen_releaserc_json
    - git_push_setup
    - semantic-release
Where the SEMANTIC_RELEASE_IMAGE variable contains the URI of the image built using the Dockerfile above and the sr_gen_releaserc_json and git_push_setup are functions defined on the $ASSETS_DIR/scripts.sh file:
  • The sr_gen_releaserc_json function generates the .releaserc.json file using the tmpl command.
  • The git_push_setup function configures git to allow pushing changes to the repository with the semantic-release command, optionally signing them with a SSH key.

The sr_gen_releaserc_json function
The code for the sr_gen_releaserc_json function is the following:
sr_gen_releaserc_json() {
  # Use nodejs as default project_type
  project_type="${PROJECT_TYPE:-nodejs}"
  # REGEX to match the rc_branch name
  rc_branch_regex='^release\/[0-9]\+\.[0-9]\+\.[0-9]\+$'
  # PATHS on the local ASSETS_DIR
  assets_dir="${CI_PROJECT_DIR}/${ASSETS_DIR}"
  sr_local_plugin="${assets_dir}/local-plugin.cjs"
  releaserc_tmpl="${assets_dir}/releaserc.json.tmpl"
  pipeline_runtime_values_yaml="/tmp/releaserc_values.yaml"
  pipeline_values_yaml="${assets_dir}/values_${project_type}_project.yaml"
  # Destination PATH
  releaserc_json=".releaserc.json"
  # Create an empty pipeline_values_yaml if missing
  test -f "$pipeline_values_yaml" || : >"$pipeline_values_yaml"
  # Create the pipeline_runtime_values_yaml file
  echo "branch: ${CI_COMMIT_BRANCH}" >"$pipeline_runtime_values_yaml"
  echo "gitlab_url: ${CI_SERVER_URL}" >>"$pipeline_runtime_values_yaml"
  # Add the rc_branch name if we are on an rc_branch
  if [ "$(echo "$CI_COMMIT_BRANCH" | sed -ne "/$rc_branch_regex/{ p }")" ]; then
    echo "rc_branch: ${CI_COMMIT_BRANCH}" >>"$pipeline_runtime_values_yaml"
  elif [ "$(echo "$CI_MERGE_REQUEST_SOURCE_BRANCH_NAME" |
      sed -ne "/$rc_branch_regex/{ p }")" ]; then
    echo "rc_branch: ${CI_MERGE_REQUEST_SOURCE_BRANCH_NAME}" \
      >>"$pipeline_runtime_values_yaml"
  fi
  echo "sr_local_plugin: ${sr_local_plugin}" >>"$pipeline_runtime_values_yaml"
  # Create the releaserc_json file
  tmpl -f "$pipeline_runtime_values_yaml" -f "$pipeline_values_yaml" \
    "$releaserc_tmpl" | jq . >"$releaserc_json"
  # Remove the pipeline_runtime_values_yaml file
  rm -f "$pipeline_runtime_values_yaml"
  # Print the releaserc_json file
  print_file_collapsed "$releaserc_json"
  # --*-- BEG: NOTE --*--
  # Rename the package.json to ignore it when calling semantic release.
  # The idea is that the local-plugin renames it back on the first step of the
  # semantic-release process.
  # --*-- END: NOTE --*--
  if [ -f "package.json" ]; then
    echo "Renaming 'package.json' to 'package.json_disabled'"
    mv "package.json" "package.json_disabled"
  fi
}
Almost all the variables used on the function are defined by gitlab except the ASSETS_DIR and PROJECT_TYPE; in the complete pipelines the ASSETS_DIR is defined on a common file included by all the pipelines and the project type is defined on the .gitlab-ci.yml file of each project. If you review the code you will see that the file processed by the tmpl command is named releaserc.json.tmpl; its contents are shown here:
{
  "plugins": [
{{- if .sr_local_plugin }}
    "{{ .sr_local_plugin }}",
{{- end }}
    [
      "@semantic-release/commit-analyzer",
      {
        "preset": "conventionalcommits",
        "releaseRules": [
          { "breaking": true, "release": "major" },
          { "revert": true, "release": "patch" },
          { "type": "feat", "release": "minor" },
          { "type": "fix", "release": "patch" },
          { "type": "perf", "release": "patch" }
        ]
      }
    ],
{{- if .replacements }}
    [
      "semantic-release-replace-plugin",
      { "replacements": {{ .replacements | toJson }} }
    ],
{{- end }}
    "@semantic-release/release-notes-generator",
{{- if eq .branch "main" }}
    [
      "@semantic-release/changelog",
      { "changelogFile": "CHANGELOG.md", "changelogTitle": "# Changelog" }
    ],
{{- end }}
    [
      "@semantic-release/git",
      {
        "assets": {{ if .assets }}{{ .assets | toJson }}{{ else }}[]{{ end }},
        "message": "ci(release): v${nextRelease.version}\n\n${nextRelease.notes}"
      }
    ],
    [
      "@semantic-release/gitlab",
      { "gitlabUrl": "{{ .gitlab_url }}", "successComment": false }
    ]
  ],
  "branches": [
    { "name": "develop", "prerelease": "SNAPSHOT" },
{{- if .rc_branch }}
    { "name": "{{ .rc_branch }}", "prerelease": "rc" },
{{- end }}
    "main"
  ]
}
The values used to process the template are defined on a file built on the fly (releaserc_values.yaml) that includes the following keys and values:
  • branch: the name of the current branch
  • gitlab_url: the URL of the gitlab server (the value is taken from the CI_SERVER_URL variable)
  • rc_branch: the name of the current rc branch; we only set the value if we are processing one because semantic-release only allows one branch to match the rc prefix and if we use a wildcard (i.e. release/*) but the users keep more than one release/#.#.# branch open at the same time the calls to semantic-release will fail for sure.
  • sr_local_plugin: the path to the local plugin we use (shown later)
The template also uses a values_${project_type}_project.yaml file that includes settings specific to the project type, the one for nodejs is as follows:
replacements:
  - files:
      - "package.json"
    from: "\"version\": \".*\""
    to: "\"version\": \"${nextRelease.version}\""
assets:
  - "CHANGELOG.md"
  - "package.json"
The replacements section is used to update the version field on the relevant files of the project (in our case the package.json file) and the assets section includes the files that will be committed to the repository when the release is published (looking at the template you can see that the CHANGELOG.md is only updated for the main branch; we do it this way because if we update the file on other branches it creates a merge nightmare and we are only interested in it for released versions anyway). The local plugin adds code to rename the package.json_disabled file to package.json if present and prints the last and next versions on the logs with a format that can be easily parsed using sed:
local-plugin.cjs
// Minimal plugin to:
// - rename the package.json_disabled file to package.json if present
// - log the semantic-release last & next versions
function verifyConditions(pluginConfig, context) {
  var fs = require('fs');
  if (fs.existsSync('package.json_disabled')) {
    fs.renameSync('package.json_disabled', 'package.json');
    context.logger.log(`verifyConditions: renamed 'package.json_disabled' to 'package.json'`);
  }
}
function analyzeCommits(pluginConfig, context) {
  if (context.lastRelease && context.lastRelease.version) {
    context.logger.log(`analyzeCommits: LAST_VERSION=${context.lastRelease.version}`);
  }
}
function verifyRelease(pluginConfig, context) {
  if (context.nextRelease && context.nextRelease.version) {
    context.logger.log(`verifyRelease: NEXT_VERSION=${context.nextRelease.version}`);
  }
}
module.exports = {
  verifyConditions,
  analyzeCommits,
  verifyRelease
};
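Because the plugin logs those lines, later jobs or scripts can recover the versions from the captured output; a minimal sketch, assuming the semantic-release output was saved to a file called sr.log (the file name is just an example):
# Extract the versions printed by the local plugin
LAST_VERSION="$(sed -ne 's/^.*LAST_VERSION=\(.*\)$/\1/p' sr.log)"
NEXT_VERSION="$(sed -ne 's/^.*NEXT_VERSION=\(.*\)$/\1/p' sr.log)"
echo "Last published version: ${LAST_VERSION:-none}"
echo "Next version to publish: ${NEXT_VERSION:-none}"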

The git_push_setup function
The code for the git_push_setup function is the following:
git_push_setup() {
  # Update global credentials to allow git clone & push for all the group repos
  git config --global credential.helper store
  cat >"$HOME/.git-credentials" <<EOF
https://fake-user:${GITLAB_REPOSITORY_TOKEN}@gitlab.com
EOF
  # Define user name, mail and signing key for semantic-release
  user_name="$SR_USER_NAME"
  user_email="$SR_USER_EMAIL"
  ssh_signing_key="$SSH_SIGNING_KEY"
  # Export git user variables
  export GIT_AUTHOR_NAME="$user_name"
  export GIT_AUTHOR_EMAIL="$user_email"
  export GIT_COMMITTER_NAME="$user_name"
  export GIT_COMMITTER_EMAIL="$user_email"
  # Sign commits with ssh if there is a SSH_SIGNING_KEY variable
  if [ "$ssh_signing_key" ]; then
    echo "Configuring GIT to sign commits with SSH"
    ssh_keyfile="/tmp/.ssh-id"
    : >"$ssh_keyfile"
    chmod 0400 "$ssh_keyfile"
    echo "$ssh_signing_key" | tr -d '\r' >"$ssh_keyfile"
    git config gpg.format ssh
    git config user.signingkey "$ssh_keyfile"
    git config commit.gpgsign true
  fi
}
The function assumes that the GITLAB_REPOSITORY_TOKEN variable (set on the CI/CD variables section of the project or group we want) contains a token with read_repository and write_repository permissions on all the projects where we are going to use this function. The SR_USER_NAME and SR_USER_EMAIL variables can be defined on a common file or the CI/CD variables section of the project or group we want to work with, and the script assumes that the optional SSH_SIGNING_KEY is exported as a CI/CD default value of type variable (that is why the keyfile is created on the fly); git is configured to use it if the variable is not empty.
Warning: Keep in mind that the variables GITLAB_REPOSITORY_TOKEN and SSH_SIGNING_KEY contain secrets, so it is probably a good idea to make them protected (if you do that you have to make the develop, main and release/* branches protected too).
Warning: The semantic-release user has to be able to push to all the projects on those protected branches; it is a good idea to create a dedicated user and add it as a MAINTAINER for the projects we want (the MAINTAINERS need to be able to push to the branches), or, if you are using a GitLab instance with a Premium license, you can use the API to allow the semantic-release user to push to the protected branches without allowing it for any other user.
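As a rough sketch of the Premium option (not from the original post), the protected branches API can restrict pushes on a branch to a single user; the project ID, user ID and token below are placeholders:
# Remove the existing protection for main (if any) and protect it again,
# allowing only the semantic-release user to push
curl --request DELETE \
  --header "PRIVATE-TOKEN: <maintainer-or-admin-token>" \
  "https://gitlab.com/api/v4/projects/<project-id>/protected_branches/main"
curl --request POST \
  --header "PRIVATE-TOKEN: <maintainer-or-admin-token>" \
  --data "name=main" \
  --data "allowed_to_push[][user_id]=<semantic-release-user-id>" \
  --data "allowed_to_merge[][access_level]=40" \
  "https://gitlab.com/api/v4/projects/<project-id>/protected_branches"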

The semantic-release command
Once we have the .releaserc file and the git configuration ready we run the semantic-release command. If the branch we are working with has one or more commits that will increment the version, the tool does the following (note that the steps described are the ones executed if we use the configuration we have generated):
  1. It detects the commits that will increment the version and calculates the next version number.
  2. Generates the release notes for the version.
  3. Applies the replacements defined on the configuration (in our example updates the version field on the package.json file).
  4. Updates the CHANGELOG.md file adding the release notes if we are going to publish the file (when we are on the main branch).
  5. Creates a commit if all or some of the files listed on the assets key have changed and uses the commit message we have defined, replacing the variables with their current values.
  6. Creates a tag with the new version number and the release notes.
  7. As we are using the gitlab plugin, after tagging it also creates a release on the project with the tag name and the release notes.

Notes about the git workflows and merges between branches
It is very important to remember that semantic-release looks at the commits of a given branch when calculating the next version to publish; that has two important implications:
  1. On pre release branches we need to have the commit that includes the tag with the released version; if we don't have it the next version is not calculated correctly.
  2. It is a bad idea to squash commits when merging a branch into another one; if we do that we will lose the information semantic-release needs to calculate the next version, and even if we use the right prefix for the squashed commit (fix, feat, ...) we miss all the messages that would otherwise go to the CHANGELOG.md file.
To make sure that we have the right commits on the pre release branches we should merge the main branch changes into the develop one after each release tag is created; in my pipelines the first job that processes a release tag creates a branch from the tag and an MR to merge it to develop. The important thing about that MR is that it must not be squashed; if we do that the tag commit will probably be lost, so we need to be careful. To merge the changes directly we can run the following code:
# Set the SR_TAG variable to the tag you want to process
SR_TAG="v1.3.2"
# Fetch all the changes
git fetch --all --prune
# Switch to the main branch
git switch main
# Pull all the changes
git pull
# Switch to the development branch
git switch develop
# Pull all the changes
git pull
# Create followup branch from tag
git switch -c "followup/$SR_TAG" "$SR_TAG"
# Change files manually & commit the changed files
git commit -a --untracked-files=no -m "ci(followup): $SR_TAG to develop"
# Switch to the development branch
git switch develop
# Merge the followup branch into the development one using the --no-ff option
git merge --no-ff "followup/$SR_TAG"
# Remove the followup branch
git branch -d "followup/$SR_TAG"
# Push the changes
git push
If we can't push directly to develop we can create an MR pushing the followup branch after committing the changes, but we have to make sure that we don't squash the commits when merging or it will not work as we want.

24 December 2023

Russ Allbery: Review: Liberty's Daughter

Review: Liberty's Daughter, by Naomi Kritzer
Publisher: Fairwood Press
Copyright: November 2023
ISBN: 1-958880-16-7
Format: Kindle
Pages: 257
Liberty's Daughter is a stand-alone near-future science fiction fix-up novel. The original stories were published in Fantasy and Science Fiction between 2012 and 2015. Beck Garrison lives on New Minerva (Min), one of a cluster of libertarian seasteads 220 nautical miles off the coast of Los Angeles. Her father brought her to Min when she was four, so it's the only life she knows. As this story opens, she's picked up a job for pocket change: finding very specific items that people want to buy. Since any new goods have to be shipped in and the seasteads have an ambiguous legal status, they don't get Amazon deliveries, but there are enough people (and enough tourists who bring high-value goods for trade) that someone probably has whatever someone else is looking for. Even sparkly high-heeled sandals size eight. Beck's father is high in the informal power structure of the seasteads for reasons that don't become apparent until very late in this book. Beck therefore has a comfortable, albeit cramped, life. The social protections, self-confidence, and feelings of invincibility that come with that wealth serve her well as a finder. After the current owner of the sandals bargains with her to find a person rather than an object, that privilege also lets her learn quite a lot before she starts getting into trouble. The political background of this novel is going to require some suspension of disbelief. The premise is that one of those harebrained libertarian schemes to form a freedom utopia has been successful enough to last for 49 years and attract 80,000 permanent residents. (It's a libertarian seastead so a lot of those residents are indentured slaves, as one does in libertarian philosophy. The number of people with shares, like Beck's father, is considerably smaller.) By the end of the book, Kritzer has offered some explanations for why the US would allow such a place to continue to exist, but the chances of the famously fractious con artists and incompetents involved in these types of endeavors creating something that survived internal power struggles for that long seem low. One has to roll with it for story reasons: Kritzer needs the population to be large enough for a plot, and the history to be long enough for Beck to exist as a character. The strength of this book is Beck, and specifically the fact that Beck is a second-generation teenager who grew up on the seastead. Unlike a lot of her age peers with their Cayman Islands vacations, she's never left and has no experience with life on land. She considers many things to be perfectly normal that are not at all normal to the reader and the various reader surrogates who show up over the course of the book. She also has the instinctive feel for seastead politics of the child of a prominent figure in a small town. And, most importantly, she has formed her own sense of morality and social structure that matches neither that of the reader nor that of her father. Liberty's Daughter is told in first-person by Beck. Judging the authenticity of Gen-Z thought processes is not one of my strengths, but Beck felt right to me. Her narration is dryly matter-of-fact, with only brief descriptions of her emotional reactions, but her personality shines in the occasional sarcasm and obstinacy. Kritzer has the teenage bafflement at the stupidity of adults down pat, as well as the tendency to jump head-first into ideas and make some decisions through sheer stubbornness. 
This is not one of those fix-up novels where the author has reworked the stories sufficiently that the original seams don't show. It is very episodic; compared to a typical novel of this length, there's more plot but less character growth. It's a good book when you want to be pulled into a stream of events that moves right along. This is not the book for deep philosophical examinations of the basis of a moral society, but what it does have, around the edges, is the way humans build human societies and develop elaborate social conventions and senses of belonging no matter how stupid the original philosophical foundations were. Even societies built on nasty exploitation can engender a sort of loyalty. Beck doesn't support the worst parts of her weird society, but she wants to fix it, not burn it to the ground. I thought there was a profound observation there. That brings me to my complaint: I hated the ending. Liberty's Daughter is in part Beck's fight for her own autonomy, both moral and financial, and the beginnings of an effort to turn her home into the sort of home she wants. By the end of the book, she's testing the limits of what she can accomplish, solidifying her own moral compass, and deciding how she wants to use the social position she inherited. It felt like the ending undermined all of that and treated her like a child. I know adolescence comes with those sorts of reversals, but I was still so mad. This is particularly annoying since I otherwise want to recommend this book. It's not ground-breaking, it's not that deep, but it was a thoroughly enjoyable day's worth of entertainment with a likable protagonist. Just don't read the last chapter, I guess? Or have more tolerance than I have for people treating sixteen-year-olds as if they're not old enough to make decisions. Content warnings: pandemic. Rating: 7 out of 10

23 December 2023

Russ Allbery: Review: Bookshops & Bonedust

Review: Bookshops & Bonedust, by Travis Baldree
Series: Legends & Lattes #2
Publisher: Tor
Copyright: 2023
ISBN: 1-250-88611-2
Format: Kindle
Pages: 337
Bookshops & Bonedust is a prequel to the cozy fantasy Legends & Lattes. You can read them in either order, although the epilogue of Bookshops & Bonedust spoils (somewhat guessable) plot developments in Legends & Lattes. Viv is a new member of the mercenary troop Rackam's Ravens and is still possessed of more enthusiasm than sense. As the story opens, she charges well ahead of her allies and nearly gets killed by a pike through the leg. She survives, but her leg needs time to heal and she is not up to the further pursuit of a necromancer. Rackam pays for a room and a doctor in the small seaside town of Murk and leaves her there to recuperate. The Ravens will pick her up when they come back through town, whenever that is. Viv is very quickly bored out of her skull. On a whim, and after some failures to find something else to occupy her, she tries a run-down local bookstore and promptly puts her foot through the boardwalk outside it. That's the start of an improbable friendship with the proprietor, a rattkin named Fern with a knack for book recommendations and a serious cash flow problem. Viv, being Viv, soon decides to make herself useful. The good side and bad side of this book are the same: it's essentially the same book as Legends & Lattes, but this time with a bookstore. There's a medieval sword and sorcery setting, a wide variety of humanoid species, a local business that needs love and attention (this time because it's failing instead of new), a lurking villain, an improbable store animal (this time a gryphlet that I found less interesting than the cat of the coffee shop), and a whole lot of found family. It turns out I was happy to read that story again, and there were some things I liked better in this version. I find bookstores more interesting than coffee shops, and although Viv and Fern go through a similar process of copying features of a modern bookstore, this felt less strained than watching Viv reinvent the precise equipment and menu of a modern coffee shop in a fantasy world. Also, Fern is an absolute delight, probably my favorite character in either of the books. I love the way that she uses book recommendations as a way of asking questions and guessing at answers about other people. As with the first book, Baldree's world-building is utterly unconcerned with trying to follow the faux-medieval conventions of either sword and sorcery or D&D-style role-playing games. On one hand, I like this; most of that so-called medievalism is nonsense anyway, and there's no reason why fantasy with D&D-style species diversity should be set in a medieval world. On the other hand, this world seems exactly like a US small town except the tavern also has rooms for rent, there are roving magical armies, and everyone fights with swords for some reason. It feels weirdly anachronistic, and I can't tell if that's because I've been brainwashed into thinking fantasy has to be medievaloid or if it's a true criticism of the book. I was reminded somewhat of reading Jack McDevitt's SF novels, which are supposedly set in the far future but are indistinguishable from 1980s suburbia except with flying cars. The other oddity with this book is that the reader of the series knows Viv isn't going to stay. This is the problem with writing a second iteration of this story as a prequel. I see why Baldree did it (the story wouldn't have worked if Viv were already established), but it casts a bit of a pall over the cheeriness of the story.
Baldree, to his credit, confronts this directly, weaves it into the relationships, and salvages it a bit more in the epilogue, but it gave the story a sort of preemptive wistfulness that was at odds with how I wanted to read it. But, despite that, the strength of this book is the characters. Viv is a good person who helps where she can, which sounds like a simple thing but is so restful to read about. This book features her first meeting with the gnome Gallina, who is always a delight. There are delicious baked goods from a dwarf, a grumpy doctor, a grumpier city guard, and a whole cast of people who felt complicated and normal and essentially decent. I'm not sure the fantasy elements do anything for this book, or this series, other than marketing and the convenience of a few plot devices. Even though one character literally disappears into a satchel, it felt like Baldree could have written roughly the same story as a contemporary novel without a hint of genre. But that's not really a complaint, since the marketing works. I would not have read this series if it had been contemporary novels, and I thoroughly enjoyed it. It's a slice of life novel about kind and decent people for readers who are bored by contemporary settings and would rather read fantasy. Works for me. I'm hoping Baldree finds other stories, since I'm not sure I want to read this one several more times, but twice was not too much. If you liked Legends & Lattes and are thinking "how can I get more of that," here's the book for you. If you haven't read Legends & Lattes, I think I would recommend reading this one first. It does many of the same things, it's a bit more polished, and then you can read Viv's adventures in internal chronological order. Rating: 8 out of 10

22 December 2023

Gunnar Wolf: Pushing some reviews this way

Over roughly the last year and a half I have been participating as a reviewer in ACM's Computing Reviews, and have even been honored as a Featured Reviewer. Given I have long enjoyed reading friends' reviews of their reading material (particularly, hats off to the very active Russ Allbery, who both beats all of my frequency expectations (I could never sustain the rhythm he reads at!) and holds documented records for his >20 years as a book reader, with far more clarity and readability than I can aim for!), I decided to explicitly share my reviews via this blog, as the audience is somewhat congruent; I will also link here some reviews that were not approved for publication, clearly marking them so. I will probably work on wrangling my Jekyll site to display an (auto-)updated page and RSS feed for the reviews. In the meantime, the reviews I have published are:

Russ Allbery: Review: Wintersmith

Review: Wintersmith, by Terry Pratchett
Series: Discworld #35
Publisher: Clarion Books
Copyright: 2006
Printing: 2007
ISBN: 0-06-089033-9
Format: Mass market
Pages: 450
Wintersmith is the 35th Discworld novel and the 3rd Tiffany Aching novel. You could probably start here, since understanding the backstory isn't vital for following the plot, but I'm not sure why you would. Tiffany is now training with Miss Treason, a 113-year-old witch who is quite different in her approach from Miss Level, Tiffany's mentor in A Hat Full of Sky. Miss Level was the unassuming and constantly helpful glue that held the neighborhood together. Miss Treason is the judge; her neighbors are scared of her and proud of being scared of her, since that means they have a proper witch who can see into their heads and sort out their problems. On the surface, they're quite different; part of the story of this book is Tiffany learning to see the similarities. First, though, Miss Treason rushes Tiffany to a strange midnight Morris Dance, without any explanation. The Morris Dance usually celebrates the coming of spring and is at the center of a village party, so Tiffany is quite confused by seeing it danced on a dark and windy night in late autumn. But there is a hole in the dance where the Fool normally is, and Tiffany can't keep herself from joining it. This proves to be a mistake. That space was left for someone very different from Tiffany, and now she's entangled herself in deep magic that she doesn't understand. This is another Pratchett novel where the main storyline didn't do much for me. All the trouble stems from Miss Treason being maddeningly opaque, and while she did warn Tiffany, she did so in that way that guarantees a protagonist of a middle-grade novel will ignore it. The Wintersmith is a boring, one-note quasi-villain, and the plot mainly revolves around elemental powers being dumber than a sack of hammers. The one thing I will say about the main plot is that the magic Tiffany danced into is entangled with courtship and romance, Tiffany turns thirteen over the course of this book, and yet this is not weird and uncomfortable reading the way it would be in the hands of many other authors. Pratchett has a keen eye for the age range that he's targeting. The first awareness that there is such a thing as romance that might be relevant to oneself pairs nicely with the Wintersmith's utter confusion at how Tiffany's intrusion unbalanced his dance. This is a very specific age and experience that I think a lot of authors would shy away from, particularly with a female protagonist, and I thought Pratchett handled it adroitly. I personally found the Wintersmith's awkward courting tedious and annoying, but that's more about me than about the book. As with A Hat Full of Sky, though, everything other than the main plot was great. It is becoming obvious how much Tiffany and Granny Weatherwax have in common, and that Granny Weatherwax recognizes this and is training Tiffany herself. This is high-quality coming-of-age material, not in the traditional fantasy sense of chosen ones and map explorations, but in the sense of slowly-developing empathy and understanding of people who think differently than you do. Tiffany, like Granny Weatherwax, has very little patience with nonsense, and her irritation with stupidity is one of her best characteristics. But she's learning how to blunt it long enough to pay attention, and to understand how people she doesn't like can still be the right people for specific situations. I particularly loved how Granny carries on with a feud at the same time that Tiffany is learning to let go of one.
It's not a contradiction or hypocrisy; it's a sign that Tiffany is entitled to her judgments and feelings, but has to learn how to keep them in their place and not let them take over. One of the great things about the Tiffany Aching books is that the villages are also characters. We don't see that much of the individual people, but one of the things Tiffany is learning is how to see the interpersonal dynamics and patterns of village life. Somehow the feelings of irritation and exasperation fade once you understand people's motives and see more sides to their character. There is a lot more Nanny Ogg in this book than there has been in the last few, and that reminded me of how much I love her character. She has a completely different approach than Granny Weatherwax, but it's just as effective in different ways. She's also the perfect witch to have around when you've stumbled into a stylized love story that you don't want to be a part of, and yet find oddly fascinating. It says something about the skill of Pratchett's characterization that I could enjoy a book this much while having no interest in the main plot. The Witches have always been great characters, but somehow they're even better when seen through Tiffany's perspective. Good stuff; if you liked any of the other Tiffany Aching books, you will like this as well. Followed by Making Money in publication order. The next Tiffany Aching novel is I Shall Wear Midnight. Rating: 8 out of 10

21 December 2023

Russ Allbery: Review: The Box

Review: The Box, by Marc Levinson
Publisher: Princeton University Press
Copyright: 2006, 2008
Printing: 2008
ISBN: 0-691-13640-8
Format: Trade paperback
Pages: 278
The shipping container as we know it is only about 65 years old. Shipping things in containers is obviously much older; we've been doing that for longer than we've had ships. But the standardized metal box, set on a rail car or loaded with hundreds of its indistinguishable siblings into an enormous, specially-designed cargo ship, became economically significant only recently. Today it is one of the oft-overlooked foundations of global supply chains. The startlingly low cost of container shipping is part of why so much of what US consumers buy comes from Asia, and why most complex machinery is assembled in multiple countries from parts gathered from a dizzying variety of sources. Marc Levinson's The Box is a history of container shipping, from its (arguable) beginnings in the trailer bodies loaded on Pan-Atlantic Steamship Corporation's Ideal-X in 1956 to just-in-time international supply chains in the 2000s. It's a popular history that falls on the academic side, with a full index and 60 pages of citations and other notes. (Per my normal convention, those pages aren't included in the sidebar page count.) The Box is organized mostly chronologically, but Levinson takes extended detours into labor relations and container standardization at the appropriate points in the timeline. The book is very US-centric. Asian, European, and Australian shipping is discussed mostly in relation to trade with the US, and Africa is barely mentioned. I don't have the background to know whether this is historically correct for container shipping or is an artifact of Levinson's focus. Many single-item popular histories focus on something that involves obvious technological innovation (paint pigments) or deep cultural resonance (salt) or at least entertaining quirkiness (punctuation marks, resignation letters). Shipping containers are important but simple and boring. The least interesting chapter in The Box covers container standardization, in which a whole bunch of people had boring meetings, wrote some things down, discovered many of the things they wrote down were dumb, wrote more things down, met with different people to have more meetings, published a standard that partly reflected the fixations of that one guy who is always involved in standards discussions, and then saw that standard be promptly ignored by the major market players. You may be wondering if that describes the whole book. It doesn't, but not because of the shipping containers. The Box is interesting because the process of economic change is interesting, and container shipping is almost entirely about business processes rather than technology. Levinson starts the substance of the book with a description of shipping before standardized containers. This was the most effective, and probably the most informative, chapter. Beyond some vague ideas picked up via cultural osmosis, I had no idea how cargo shipping worked. Levinson gives the reader a memorable feel for the sheer amount of physical labor involved in loading and unloading a ship with mixed cargo (what's called "breakbulk" cargo to distinguish it from bulk cargo like coal or wheat that fills an entire hold). It's not just the effort of hauling barrels, bales, or boxes with cranes or raw muscle power, although that is significant. It's also the need to touch every piece of cargo to move it, inventory it, warehouse it, and then load it on a truck or train.
The idea of container shipping is widely attributed, including by Levinson, to Malcom McLean, a trucking magnate who became obsessed with the idea of what we now call intermodal transport: using the same container for goods on ships, railroads, and trucks so that the contents don't have to be unpacked and repacked at each transfer point. Levinson uses his career as an anchor for the story, from his acquisition of Pan-Atlantic Steamship Corporation to pursue his original idea (backed by private equity and debt, in a very modern twist), through his years running Sea-Land as the first successful major container shipper, and culminating in his disastrous attempted return to shipping by acquiring United States Lines. I am dubious of Great Man narratives in history books, and I think Levinson may be overselling McLean's role. Container shipping was an obvious idea that the industry had been talking about for decades. Even Levinson admits that, despite a few gestures at giving McLean central credit. Everyone involved in shipping understood that cargo handling was the most expensive and time-consuming part, and that if one could minimize cargo handling at the docks by loading and unloading full containers that didn't have to be opened, shipping costs would be much lower (and profits higher). The idea wasn't the hard part. McLean was the first person to pull it off at scale, thanks to some audacious economic risks and a willingness to throw sharp elbows and play politics, but it seems likely that someone else would have played that role if McLean hadn't existed. Container shipping didn't happen earlier because achieving that cost savings required a huge expenditure of capital and a major disruption of a transportation industry that wasn't interested in being disrupted. The ships had to be remodeled and eventually replaced; manufacturing had to change; railroad and trucking in theory had to change (in practice, intermodal transport, McLean's obsession, didn't happen at scale until much later); pricing had to be entirely reworked; logistical tracking of goods had to be done much differently; and significant amounts of extremely expensive equipment to load and unload heavy containers had to be designed, built, and installed. McLean's efforts proved the cost savings were real and compelling, but it still took two decades before the shipping industry reconstructed itself around containers. That interim period is where this history becomes a labor story, and that's where Levinson's biases become somewhat distracting. In the United States, loading and unloading of cargo ships was done by unionized longshoremen through a bizarre, complex, and long-standing system of contract hiring. The cost savings of container shipping comes almost completely from the loss of work for longshoremen. It's a classic replacement of labor with capital; the work done by gangs of twenty or more longshoremen is instead done by a single crane operator at much higher speed and efficiency. The longshoremen unions therefore opposed containerization and launched numerous strikes and other labor actions to delay use of containers, force continued hiring that containers made unnecessary, or win buyouts and payoffs for current longshoremen. Levinson is trying to write a neutral history and occasionally shows some sympathy for longshoremen, but they still get the Luddite treatment in this book: the doomed reactionaries holding back progress. 
Longshoremen had a vigorous and powerful union that won better working conditions structured in ways that look absurd to outsiders, such as requiring that ships hire twice as many men as necessary so that half of them could get paid while not working. The unions also had a reputation for corruption that Levinson stresses constantly, and theft of breakbulk cargo during loading and warehousing was common. One of the interesting selling points for containers was that lossage from theft during shipping apparently decreased dramatically. It's obvious that the surface demand of the longshoremen unions, that either containers not be used or that just as many manual laborers be hired for container shipping as for earlier breakbulk shipping, was impossible, and that the profession as it existed in the 1950s was doomed. But beneath those facts, and the smoke screen of Levinson's obvious distaste for their unions, is a real question about what society owes workers whose jobs are eliminated by major shifts in business practices. That question of fairness becomes more pointed when one realizes that this shift was massively subsidized by US federal and local governments. McLean's Sea-Land benefited from direct government funding and subsidized navy surplus ships, massive port construction in New Jersey with public funds, and a sweetheart logistics contract from the US military to supply troops fighting the Vietnam War that was so generous that the return voyage was free and every container Sea-Land picked up from Japanese ports was pure profit. The US shipping industry was heavily government-supported, particularly in the early days when the labor conflicts were starting. Levinson notes all of this, but never draws the contrast between the massive support for shipping corporations and the complete lack of formal support for longshoremen. There are hard ethical questions about what society owes displaced workers even in a pure capitalist industry transformation, and this was very far from pure capitalism. The US government bankrolled large parts of the growth of container shipping, but the only way that longshoremen could get part of that money was through strikes to force payouts from private shipping companies. There are interesting questions of social and ethical history here that would require careful disentangling of the tendency of any group to oppose disruptive change and fairness questions of who gets government support and who doesn't. They will have to wait for another book; Levinson never mentions them. There were some things about this book that annoyed me, but overall it's a solid work of popular history and deserves its fame. Levinson's account is easy to follow, specific without being tedious, and backed by voluminous notes. It's not the most compelling story on its own merits; you have to have some interest in logistics and economics to justify reading the entire saga. But it's the sort of history that gives one a sense of the fractal complexity of any area of human endeavor, and I usually find those worth reading. Recommended if you like this sort of thing. Rating: 7 out of 10

20 December 2023

Melissa Wen: The Rainbow Treasure Map Talk: Advanced color management on Linux with AMD/Steam Deck.

Last week marked a major milestone for me: the AMD driver-specific color management properties reached the upstream linux-next! And to celebrate, I'm happy to share the slides and notes from my 2023 XDC talk, The Rainbow Treasure Map, along with the individual recording that just dropped last week on YouTube (talk about happy coincidences!).

Steam Deck Rainbow: Treasure Map & Magic Frogs While I may be bubbly and chatty in everyday life, the stage isn't exactly my comfort zone (hallway talks are more my speed). But the journey of developing the AMD color management properties was so full of discoveries that I simply had to share the experience. Witnessing Jeremy and Joshua's fantastic work bring it all to life on the Steam Deck OLED was like uncovering magical ingredients and whipping up something truly enchanting. For XDC 2023, we split our Rainbow journey into two talks. My focus, The Rainbow Treasure Map, explored the new color features we added to the Linux kernel driver, diving deep into the hardware capabilities of AMD/Steam Deck. Joshua then followed with The Rainbow Frogs and showed the breathtaking color magic released on Gamescope thanks to the power unlocked by the kernel driver's Steam Deck color properties.

Packing a Rainbow into 15 Minutes I had so much to tell, but a half-slot talk meant crafting a concise presentation. To squeeze everything into 15 minutes (and calm my pre-talk jitters a bit!), I drafted and practiced those slides and notes countless times. So grab your map, and let's embark on the Rainbow journey together!
Slide 1: The Rainbow Treasure Map - Advanced Color Management on Linux with AMD/SteamDeck Intro: Hi, I'm Melissa from Igalia and welcome to the Rainbow Treasure Map, a talk about advanced color management on Linux with AMD/SteamDeck.
Slide 2: List useful links for this technical talk Useful links: First of all, if you are not familiar with the topic, you may find these links useful.
  1. XDC 2022 - I m not an AMD expert, but - Melissa Wen
  2. XDC 2022 - Is HDR Harder? - Harry Wentland
  3. XDC 2022 Lightning - HDR Workshop Summary - Harry Wentland
  4. Color management and HDR documentation for FOSS graphics - Pekka Paalanen et al.
  5. Cinematic Color - 2012 SIGGRAPH course notes - Jeremy Selan
  6. AMD Driver-specific Properties for Color Management on Linux (Part 1) - Melissa Wen
Slide 3: Why do we need advanced color management on Linux? Context: When we talk about colors in the graphics chain, we should keep in mind that we have a wide variety of source content colorimetry, a variety of output display devices and also the internal processing. Users expect consistent color reproduction across all these devices. The userspace can use GPU-accelerated color management to get it. But this also requires an interface with display kernel drivers that is currently missing from the DRM/KMS framework.
Slide 4: Describe our work on AMD driver-specific color properties Since April, I've been bothering the DRM community by sending patchsets from my and Joshua's work to add driver-specific color properties to the AMD display driver. In parallel, discussions on defining a generic color management interface are still ongoing in the community. Moreover, we are still not clear about the diversity of color capabilities among hardware vendors. To bridge this gap, we defined a color pipeline for Gamescope that fits the latest versions of AMD hardware. It delivers advanced color management features for gamut mapping, HDR rendering, SDR on HDR, and HDR on SDR.
Slide 5: Describe the AMD/SteamDeck - our hardware AMD/Steam Deck hardware: AMD frequently releases new GPU and APU generations. Each generation comes with a DCN version with display hardware improvements. Therefore, keep in mind that this work uses the AMD Steam Deck hardware and its kernel driver. The Steam Deck is an APU with a DCN3.01 display driver, a DCN3 family. It's important to have this information since newer AMD DCN drivers inherit implementations from previous families, but also each generation of AMD hardware may introduce new color capabilities. Therefore, I recommend that you familiarize yourself with the hardware you are working on.
Slide 6: Diagram with the three layers of the AMD display driver on Linux The AMD display driver in the kernel space: It consists of three layers: (1) the DRM/KMS framework, (2) the AMD Display Manager, and (3) the AMD Display Core. We extended the color interface exposed to userspace by leveraging existing DRM resources and connecting them using driver-specific functions for color property management.
Slide 7: Three-layers diagram highlighting AMD Display Manager, DM - the layer that connects DC and DRM Bridging DC color capabilities and the DRM API required significant changes in the color management of AMD Display Manager - the Linux-dependent part that connects the AMD DC interface to the DRM/KMS framework.
Slide 8: Three-layers diagram highlighting AMD Display Core, DC - the shared code The AMD DC is the OS-agnostic layer. Its code is shared between platforms and DCN versions. Examining this part helps us understand the AMD color pipeline and hardware capabilities, since the machinery for hardware settings and resource management is already there.
Slide 9: Diagram of the AMD Display Core Next architecture with main elements and data flow The newest architecture for AMD display hardware is the AMD Display Core Next.
Slide 10: Diagram of the AMD Display Core Next where only DPP and MPC blocks are highlighted In this architecture, two blocks have the capability to manage colors:
  • Display Pipe and Plane (DPP) - for pre-blending adjustments;
  • Multiple Pipe/Plane Combined (MPC) - for post-blending color transformations.
Let's see what we have in the DRM API for pre-blending color management.
Slide 11: Blank slide with no content, only a title 'Pre-blending: DRM plane' DRM plane color properties: This is the DRM color management API before blending. Nothing! Except two basic DRM plane properties: color_encoding and color_range for the input colorspace conversion, which is not covered by this work.
Slide 12: Diagram with color capabilities and structures in AMD DC layer without any DRM plane color interface (before blending), only the DRM CRTC color interface for post blending In case you're not familiar with AMD shared code, what we need to do is basically draw a map and navigate there! We have some DRM color properties after blending, but nothing before blending yet. But much of the hardware programming was already implemented in the AMD DC layer, thanks to the shared code.
Slide 13: Previous diagram with a rectangle to highlight the empty space in the DRM plane interface that will be filled by AMD plane properties Still, both the DRM interface and its connection to the shared code were missing. That's when the search begins!
Slide 14: Color Pipeline Diagram with the plane color interface filled by AMD plane properties but without connections to AMD DC resources AMD driver-specific color pipeline: Looking at the color capabilities of the hardware, we arrive at this initial set of properties. The path wasn't exactly like that; we had many iterations and discoveries until we reached this pipeline.
Slide 15: Color Pipeline Diagram connecting AMD plane degamma properties, LUT and TF, to AMD DC resources The Plane Degamma is our first driver-specific property before blending. It's used to linearize the color space from encoded values to light linear values.
Slide 16: Describe plane degamma properties and hardware capabilities We can use a pre-defined transfer function or a user lookup table (in short, LUT) to linearize the color space. Pre-defined transfer functions for plane degamma are hardcoded curves that go to a specific hardware block called DPP Degamma ROM. It supports the following transfer functions: sRGB EOTF, BT.709 inverse OETF, PQ EOTF, and pure power curves Gamma 2.2, Gamma 2.4 and Gamma 2.6. We also have a one-dimensional LUT. This 1D LUT has four thousand ninety-six (4096) entries, the usual 1D LUT size in DRM/KMS. It's an array of drm_color_lut that goes to the DPP Gamma Correction block.
Slide 17: Color Pipeline Diagram connecting AMD plane CTM property to AMD DC resources We now also have a color transformation matrix (CTM) for color space conversion.
Slide 18: Describe plane CTM property and hardware capabilities It's a 3x4 matrix of fixed points that goes to the DPP Gamut Remap Block. Both pre- and post-blending matrices previously went to the same color block. We worked on detaching them to clear both paths. Now each CTM goes its own way.
Slide 19: Color Pipeline Diagram connecting AMD plane HDR multiplier property to AMD DC resources Next, the HDR Multiplier. The HDR Multiplier is a factor applied to the color values of an image to increase their overall brightness.
Slide 20: Describe plane HDR mult property and hardware capabilities This is useful for converting images from a standard dynamic range (SDR) to a high dynamic range (HDR). As it can range beyond [0.0, 1.0], subsequent transforms need to use the PQ(HDR) transfer functions.
Slide 21: Color Pipeline Diagram connecting AMD plane shaper properties, LUT and TF, to AMD DC resources And we need a 3D LUT. 
But a 3D LUT has a limited number of entries in each dimension, so we want to use it in a colorspace that is optimized for human vision. That means a non-linear space. To deliver it, userspace may need one 1D LUT before the 3D LUT to delinearize content and another one after it to linearize content again for blending.
Slide 22: Describe plane shaper properties and hardware capabilities The pre-3D-LUT curve is called the Shaper curve. Unlike the Degamma TF, there are no hardcoded curves for the shaper TF, but we can use the AMD color module in the driver to build shaper curves from pre-defined coefficients. The color module combines the TF and the user LUT values into the LUT that goes to the DPP Shaper RAM block.
Slide 23: Color Pipeline Diagram connecting AMD plane 3D LUT property to AMD DC resources Finally, our rockstar, the 3D LUT. A 3D LUT is perfect for complex color transformations and adjustments between color channels.
Slide 24: Describe plane 3D LUT property and hardware capabilities A 3D LUT is also more complex to manage and requires more computational resources; as a consequence, its number of entries is usually limited. To overcome this restriction, the array contains samples from the approximated function and values between samples are estimated by tetrahedral interpolation. AMD supports 17 and 9 as the size of a single dimension. Blue is the outermost dimension, red the innermost.
Slide 25: Color Pipeline Diagram connecting AMD plane blend properties, LUT and TF, to AMD DC resources As mentioned, we need a post-3D-LUT curve to linearize the color space before blending. This is done by the Blend TF and LUT.
Slide 26: Describe plane blend properties and hardware capabilities Similar to the shaper TF, there are no hardcoded curves for the Blend TF. The pre-defined curves are the same as in the Degamma block, but calculated by the color module. The resulting LUT goes to the DPP Blend RAM block.
Slide 27: Color Pipeline Diagram with all AMD plane color properties connected to AMD DC resources and links showing the conflict between plane and CRTC degamma Now we have everything connected before blending. As a conflict between plane and CRTC Degamma was inevitable, our approach doesn't accept that both are set at the same time.
Slide 28: Color Pipeline Diagram connecting AMD CRTC gamma TF property to AMD DC resources We also optimized the conversion of the framebuffer to wire encoding by adding support for pre-defined CRTC Gamma TFs.
Slide 29: Describe CRTC gamma TF property and hardware capabilities Again, there are no hardcoded curves, and the TF and LUT are combined by the AMD color module. The same types of shaper curves are supported. The resulting LUT goes to the MPC Gamma RAM block.
Slide 30: Color Pipeline Diagram with all AMD driver-specific color properties connected to AMD DC resources Finally, we arrived at the final version of the DRM/AMD driver-specific color management pipeline. With this knowledge, you're ready to better enjoy the rainbow treasure of AMD display hardware and the world of graphics computing.
Slide 31: SteamDeck/Gamescope Color Pipeline Diagram with rectangles labeling each block of the pipeline with the related AMD color property With this work, Gamescope/Steam Deck embraces the color capabilities of the AMD GPU. We highlight here how we map the Gamescope color pipeline to each AMD color block.
Slide 32: Final slide. Thank you! Future work: The search for the rainbow treasure is not over! The Linux DRM subsystem contains many hidden treasures from different vendors. 
We want more complex color transformations and adjustments available on Linux. We also want to expose all GPU color capabilities from all hardware vendors to the Linux userspace. Thanks to Joshua and Harry for this joint work, and to the Linux DRI community for all the feedback and reviews. The amazing part of this work comes in the next talk with Joshua and The Rainbow Frogs! Any questions?
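As a rough way to check for these driver-specific properties on a running system (a hedged sketch: it assumes a kernel that already carries the patches, the amdgpu driver in use, and libdrm's modetest utility installed; the exact property names may differ), you can dump the plane and CRTC properties and look for the AMD-specific entries:
sudo modetest -M amdgpu -p | grep -i 'amd'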
References:
  1. Slides of the talk The Rainbow Treasure Map.
  2. Youtube video of the talk The Rainbow Treasure Map.
  3. Patch series for AMD driver-specific color management properties (upstream in Linux v6.8).
  4. SteamDeck/Gamescope color management pipeline
  5. XDC 2023 website.
  6. Igalia website.

19 December 2023

Jonathan Dowland: William Basinski, Gateshead, 2022

I was looking over the list of live music I'd seen this year and realised that the avant-garde composer William Basinski was actually last year, and I had forgotten to write about it! In November 2022, Basinski headlined a night of performances which otherwise featured folk from the venue's "Artists in Residence" programme, with some affiliation to Newcastle's DIY music scene. Unfortunately we arrived too late to catch any of the other acts: partly because of the venue's sometimes dogged insistence that people can only enter or leave the halls during intervals, and partly because the building works surrounding it had effectively closed the southern entrance, so we had to walk to the north side of the building to get in1. Basinski was performing work from Lamentations. Basinski himself presented very differently to how I had imagined him: he's got the Texas drawl, mediated through a fair amount of time spent in New York; very camp, in a glittery top; he kicked off the gig complaining about how tired he was, before a mini rant about the state of the world, riffing on a title from the album: Please, This Shit Has Got To Stop. We were in Hall 1, the larger of the two, and it was sparsely attended; a few people walked out mid performance. My gig-buddy Rob (a useful barometer for me on how things have gone) remarked that it was one of the most unique and unusual gigs he'd been to. I recognised snatches of the tracks from the album, but I'm hard-pressed to name or sequence them, and they flowed into each other. I don't know how much of what we were hearing was "live" or what, if anything, was being decided during the performance, but Basinski's set-up included what looked like archaic tape equipment, with exposed loops of tape running between spools that could be interfered with by other tools. The encore was a unique, unreleased mix of Melancholia (II), which (making no apologies) Basinski hit play on before retiring backstage. I didn't take any photos. From memory, I think the venue had specifically stated that filming or photos were not allowed for this performance. People at prior shows in New York and London filmed some of those shows, which were substantially similar; I've included embeds of them above. Lots of Basinski's work is on Bandcamp; the three pieces I particularly enjoy are Lamentations, On Time Out of Time, and his best-known work, The Disintegration Loops.

  1. I don't want to speak ill of the venue, though: The Sage, as it was, and the Glasshouse, as it is now known, has ended up being the venue I've attended most this year (2023), and it's such a civilised place: plenty of bars, great drinks selection (both alcoholic and not, hot and cold), loads of clean toilets, a free cloakroom, fantastic acoustics, polite staff; the list goes on.

16 December 2023

Thomas Lange: Adding a writeable data partition to an ISO image

Some years ago a customer needed a live ISO containing a customized FAI environment (not for installing but for extended hardware stress tests), but on a USB stick, with the possibility of storing the logs of the tests on the USB stick. But an ISO file system (iso9660) remains read-only, even when put onto a USB stick. I had the idea to add another partition onto the USB stick after the ISO was written to it (using cp or dd). You can use fdisk with an ISO file, add a new partition, loop mount the ISO and format this partition. That's all. This worked perfectly for my customer. I forgot this idea for a while, but a few weeks ago I remembered it. What would be possible if my FAI (Fully Automatic Installation) image also provided such a partition? Which things could be provided on this partition? Could I provide a FAI ISO whose users would be able to easily put their own .deb packages onto it without remastering the ISO or building an ISO on their own? Now here's the shell script that extends an ISO or a USB stick with an ext4 or exFAT partition and sets the file system label to MY-DATA (a rough sketch of the manual approach follows the usage examples below). https://github.com/faiproject/fai/blob/master/bin/mk-data-partition Examples of how to use mk-data-partition
Add a data partition of size 1G to the Debian installer ISO using an ext4 partition
# mk-data-partition -s 1G debian-12.2.0-amd64-netinst.iso
Create the data partition using an exFAT file system on the USB stick /dev/sdb.
First copy (or dd) the ISO onto the USB stick. Then add the data partition
to the USB stick.
# cp faicd64-large_6.0.3.iso /dev/sdb
# mk-data-partition -F /dev/sdb
Create the data partition and copy directories A and B to it
# mk-data-partition -c debian-12.2.0-amd64-netinst.iso A B
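For reference, a rough manual equivalent of what mk-data-partition automates, done directly on a USB stick (the device name /dev/sdb and the partition number 3 are assumptions for illustration; double-check the device before running, as this overwrites it):
# cp debian-12.2.0-amd64-netinst.iso /dev/sdb
# fdisk /dev/sdb
(in fdisk: create a new partition in the free space after the ISO data, then write the partition table)
# partprobe /dev/sdb
# mkfs.ext4 -L MY-DATA /dev/sdb3
Afterwards the new partition can be mounted by its label, e.g. mount LABEL=MY-DATA /mnt.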
The next FAI version will use this in different parts of an installation. A blog post about this will follow. A new idea for our Debian installer ISO Here are my ideas for how the Debian installer could use such a partition if it automatically detects and mounts it (by its file system label): The advantage of this approach is that there's no need for the user to remaster the official Debian installer ISO, which is not easy for end users. We only have to extend the installer to use files from this data partition in some portions of the installation. Additional udebs, packages or firmware could automatically be used by the installer. Companies could easily create an OEM installer of Debian. What do you think about this idea? Please send feedback to lange@debian.org

14 December 2023

Russell Coker: Fat Finger Shell

I've been trying out the Fat Finger Shell, which is a terminal emulator for Linux on touch screen devices where the keyboard is overlayed with the terminal output. This means that instead of having a tiny keyboard and a tiny terminal output you have the full screen for both. There is a YouTube video showing how the Fat Finger Shell works [1]. Here is a link to the Github page [2], which hasn't changed much in the last 11 years. Currently the shell is hard-coded to an 80*24 terminal and a 640*480 screen, which doesn't match any modern hardware. Some parts of this are easy to change, but then there's the comment "I ran once XGetGeometry and I am harcoded (bad) values for x, y, etc..", which is followed by some magic numbers that are not easy to change and which are hacked into the source of xvt. The configuration of this is almost great. It has a plain text file where each line has 4 numbers representing the X and Y coordinates of opposite corners of a rectangle and additional information on what the key is, which is relatively easy to edit. But then it has an image which has to match that; the obvious improvement would be to not have an image but to just display rectangles for each pair of corner coordinates and display the glyph of the character in question inside it. I think there is a real need for a terminal like this for use on devices like the PinePhonePro; it won't be to everyone's taste but the people who like it will really like it. The features that such a shell needs for modern use are being based on Wayland, supporting a variety of screen resolutions and particularly the commonly used ones like 720*1440 and 1920*1080 (with terminal resolution matching the combination of screen resolution and font), and having code derived from a newer terminal emulator. As a final note it would be good for such a terminal to also take input from a regular keyboard, so when you plug your Linux phone into a dock you don't need to close your existing terminal sessions. There is a Debian RFP/ITP bug for this [3] which I think should be closed due to nothing happening for 11 years and the fact that so much work is required to make this usable. The current Fat Finger Shell code is a good demonstration of the concept, but I don't think it makes sense to move on with this code base. One of the many possible ways of addressing this with modern graphics technology might be to have a semi-transparent window overlaying the screen and generating virtual keyboard events for whichever window happens to be below it, so that instead of being limited to one terminal program by the choice of input method, the input would work for any terminal that the user may choose, as well as any other text-based program (email, IM, etc).

12 December 2023

Raju Devidas: Nextcloud AIO install with docker-compose and nginx reverse proxy

Nextcloud is a popular self-hosted solution for file sync and share as well as cloud apps such as document editing, chat and talk, calendar, photo gallery etc. This guide will walk you through setting up Nextcloud AIO using Docker Compose. This blog post would not be possible without immense help from Sahil Dhiman a.k.a. sahilister. There are various ways in which the installation could be done; here are the pre-requisites for our setup.

Step 1: The docker-compose file for Nextcloud AIO. The original compose.yml file is present in Nextcloud AIO's git repo here. Taking that file as a reference, we have our own compose.yml here:
services:
  nextcloud-aio-mastercontainer:
    image: nextcloud/all-in-one:latest
    init: true
    restart: always
    container_name: nextcloud-aio-mastercontainer # This line is not allowed to be changed as otherwise AIO will not work correctly
    volumes:
      - nextcloud_aio_mastercontainer:/mnt/docker-aio-config # This line is not allowed to be changed as otherwise the built-in backup solution will not work
      - /var/run/docker.sock:/var/run/docker.sock:ro # May be changed on macOS, Windows or docker rootless. See the applicable documentation. If adjusting, don't forget to also set 'WATCHTOWER_DOCKER_SOCKET_PATH'!
    ports:
      - 8080:8080
    environment: # Is needed when using any of the options below
      # - AIO_DISABLE_BACKUP_SECTION=false # Setting this to true allows to hide the backup section in the AIO interface. See https://github.com/nextcloud/all-in-one#how-to-disable-the-backup-section
      - APACHE_PORT=32323 # Is needed when running behind a web server or reverse proxy (like Apache, Nginx, Cloudflare Tunnel and else). See https://github.com/nextcloud/all-in-one/blob/main/reverse-proxy.md
      - APACHE_IP_BINDING=127.0.0.1 # Should be set when running behind a web server or reverse proxy (like Apache, Nginx, Cloudflare Tunnel and else) that is running on the same host. See https://github.com/nextcloud/all-in-one/blob/main/reverse-proxy.md
      # - BORG_RETENTION_POLICY=--keep-within=7d --keep-weekly=4 --keep-monthly=6 # Allows to adjust borgs retention policy. See https://github.com/nextcloud/all-in-one#how-to-adjust-borgs-retention-policy
      # - COLLABORA_SECCOMP_DISABLED=false # Setting this to true allows to disable Collabora's Seccomp feature. See https://github.com/nextcloud/all-in-one#how-to-disable-collaboras-seccomp-feature
      - NEXTCLOUD_DATADIR=/opt/docker/cloud.raju.dev/nextcloud # Allows to set the host directory for Nextcloud's datadir.   Warning: do not set or adjust this value after the initial Nextcloud installation is done! See https://github.com/nextcloud/all-in-one#how-to-change-the-default-location-of-nextclouds-datadir
      # - NEXTCLOUD_MOUNT=/mnt/ # Allows the Nextcloud container to access the chosen directory on the host. See https://github.com/nextcloud/all-in-one#how-to-allow-the-nextcloud-container-to-access-directories-on-the-host
      # - NEXTCLOUD_UPLOAD_LIMIT=10G # Can be adjusted if you need more. See https://github.com/nextcloud/all-in-one#how-to-adjust-the-upload-limit-for-nextcloud
      # - NEXTCLOUD_MAX_TIME=3600 # Can be adjusted if you need more. See https://github.com/nextcloud/all-in-one#how-to-adjust-the-max-execution-time-for-nextcloud
      # - NEXTCLOUD_MEMORY_LIMIT=512M # Can be adjusted if you need more. See https://github.com/nextcloud/all-in-one#how-to-adjust-the-php-memory-limit-for-nextcloud
      # - NEXTCLOUD_TRUSTED_CACERTS_DIR=/path/to/my/cacerts # CA certificates in this directory will be trusted by the OS of the nextcloud container (Useful e.g. for LDAPS) See https://github.com/nextcloud/all-in-one#how-to-trust-user-defined-certification-authorities-ca
      # - NEXTCLOUD_STARTUP_APPS=deck twofactor_totp tasks calendar contacts notes # Allows to modify the Nextcloud apps that are installed on starting AIO the first time. See https://github.com/nextcloud/all-in-one#how-to-change-the-nextcloud-apps-that-are-installed-on-the-first-startup
      # - NEXTCLOUD_ADDITIONAL_APKS=imagemagick # This allows to add additional packages to the Nextcloud container permanently. Default is imagemagick but can be overwritten by modifying this value. See https://github.com/nextcloud/all-in-one#how-to-add-os-packages-permanently-to-the-nextcloud-container
      # - NEXTCLOUD_ADDITIONAL_PHP_EXTENSIONS=imagick # This allows to add additional php extensions to the Nextcloud container permanently. Default is imagick but can be overwritten by modifying this value. See https://github.com/nextcloud/all-in-one#how-to-add-php-extensions-permanently-to-the-nextcloud-container
      # - NEXTCLOUD_ENABLE_DRI_DEVICE=true # This allows to enable the /dev/dri device in the Nextcloud container.   Warning: this only works if the '/dev/dri' device is present on the host! If it should not exist on your host, don't set this to true as otherwise the Nextcloud container will fail to start! See https://github.com/nextcloud/all-in-one#how-to-enable-hardware-transcoding-for-nextcloud
      # - NEXTCLOUD_KEEP_DISABLED_APPS=false # Setting this to true will keep Nextcloud apps that are disabled in the AIO interface and not uninstall them if they should be installed. See https://github.com/nextcloud/all-in-one#how-to-keep-disabled-apps
      # - TALK_PORT=3478 # This allows to adjust the port that the talk container is using. See https://github.com/nextcloud/all-in-one#how-to-adjust-the-talk-port
      # - WATCHTOWER_DOCKER_SOCKET_PATH=/var/run/docker.sock # Needs to be specified if the docker socket on the host is not located in the default '/var/run/docker.sock'. Otherwise mastercontainer updates will fail. For macos it needs to be '/var/run/docker.sock'
    # networks: # Is needed when you want to create the nextcloud-aio network with ipv6-support using this file, see the network config at the bottom of the file
      # - nextcloud-aio # Is needed when you want to create the nextcloud-aio network with ipv6-support using this file, see the network config at the bottom of the file
      # - SKIP_DOMAIN_VALIDATION=true
    # # Uncomment the following line when using SELinux
    # security_opt: ["label:disable"]
volumes: # If you want to store the data on a different drive, see https://github.com/nextcloud/all-in-one#how-to-store-the-filesinstallation-on-a-separate-drive
  nextcloud_aio_mastercontainer:
    name: nextcloud_aio_mastercontainer # This line is not allowed to be changed as otherwise the built-in backup solution will not work
I have not removed many of the commented options in the compose file, in case I want to use them in the future. If you want a smaller, cleaner compose file without the extra options, you can refer to this one:
services:
  nextcloud-aio-mastercontainer:
    image: nextcloud/all-in-one:latest
    init: true
    restart: always
    container_name: nextcloud-aio-mastercontainer
    volumes:
      - nextcloud_aio_mastercontainer:/mnt/docker-aio-config
      - /var/run/docker.sock:/var/run/docker.sock:ro
    ports:
      - 8080:8080
    environment:
      - APACHE_PORT=32323
      - APACHE_IP_BINDING=127.0.0.1
      - NEXTCLOUD_DATADIR=/opt/docker/nextcloud
volumes:
  nextcloud_aio_mastercontainer:
    name: nextcloud_aio_mastercontainer
I am using a separate directory to store nextcloud data. As per the nextcloud documentation you should be using a separate partition if you want to use this feature; however, I did not have that option on my server, so I used a separate directory instead. We also use a custom port on which nextcloud listens for operations; we have set it to 32323 above, but you can use any port in the permissible range. The 8080 port is used to set up the AIO management interface. Neither 8080 nor the APACHE_PORT needs to be open to the outside on the host machine, as we will be using a reverse proxy setup with nginx to direct requests. Once you have your preferred compose.yml file, you can start the containers using
$ docker-compose -f compose.yml up -d 
Creating network "clouddev_default" with the default driver
Creating volume "nextcloud_aio_mastercontainer" with default driver
Creating nextcloud-aio-mastercontainer ... done
Once your containers are running, we can do the nginx setup.
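Optionally, as a quick sanity check before moving on, you can confirm that the master container is up and that the AIO interface answers locally on port 8080 (it serves HTTPS with a self-signed certificate at this point, hence the -k):
$ docker ps --filter name=nextcloud-aio-mastercontainer
$ curl -kI https://127.0.0.1:8080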

Step 2: Configuring the nginx reverse proxy for our domain on the host. A reference nginx configuration for Nextcloud AIO is given in the Nextcloud git repository here. You can modify the configuration file according to your needs and setup. Here is the configuration that we are using:

map $http_upgrade $connection_upgrade {
    default upgrade;
    '' close;
}

server {
    listen 80;
    #listen [::]:80;            # comment to disable IPv6

    if ($scheme = "http") {
        return 301 https://$host$request_uri;
    }

    listen 443 ssl http2;      # for nginx versions below v1.25.1
    #listen [::]:443 ssl http2; # for nginx versions below v1.25.1 - comment to disable IPv6
    # listen 443 ssl;      # for nginx v1.25.1+
    # listen [::]:443 ssl; # for nginx v1.25.1+ - keep comment to disable IPv6
    # http2 on;                                 # uncomment to enable HTTP/2        - supported on nginx v1.25.1+
    # http3 on;                                 # uncomment to enable HTTP/3 / QUIC - supported on nginx v1.25.0+
    # quic_retry on;                            # uncomment to enable HTTP/3 / QUIC - supported on nginx v1.25.0+
    # add_header Alt-Svc 'h3=":443"; ma=86400'; # uncomment to enable HTTP/3 / QUIC - supported on nginx v1.25.0+
    # listen 443 quic reuseport;       # uncomment to enable HTTP/3 / QUIC - supported on nginx v1.25.0+ - please remove "reuseport" if there is already another quic listener on port 443 with enabled reuseport
    # listen [::]:443 quic reuseport;  # uncomment to enable HTTP/3 / QUIC - supported on nginx v1.25.0+ - please remove "reuseport" if there is already another quic listener on port 443 with enabled reuseport - keep comment to disable IPv6

    server_name cloud.example.com;

    location / {
        proxy_pass http://127.0.0.1:32323$request_uri;

        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Port $server_port;
        proxy_set_header X-Forwarded-Scheme $scheme;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Accept-Encoding "";
        proxy_set_header Host $host;

        client_body_buffer_size 512k;
        proxy_read_timeout 86400s;
        client_max_body_size 0;

        # Websocket
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $connection_upgrade;
    }

    ssl_certificate /etc/letsencrypt/live/cloud.example.com/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/cloud.example.com/privkey.pem; # managed by Certbot

    ssl_session_timeout 1d;
    ssl_session_cache shared:MozSSL:10m; # about 40000 sessions
    ssl_session_tickets off;

    ssl_protocols TLSv1.2 TLSv1.3;
    ssl_ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:DHE-RSA-CHACHA20-POLY1305;
    ssl_prefer_server_ciphers on;

    # Optional settings:

    # OCSP stapling
    # ssl_stapling on;
    # ssl_stapling_verify on;
    # ssl_trusted_certificate /etc/letsencrypt/live/<your-nc-domain>/chain.pem;

    # replace with the IP address of your resolver
    # resolver 127.0.0.1; # needed for OCSP stapling: e.g. use 94.140.15.15 for adguard / 1.1.1.1 for cloudflared or 8.8.8.8 for google - you can use the same nameserver as listed in your /etc/resolv.conf file
}
Please note that you need to have valid SSL certificates for your domain for this configuration to work. Steps on getting valid SSL certificates for your domain are beyond the scope of this article. You can do a web search on getting SSL certificates with letsencrypt and you will find several resources on that, or I may write a separate blog post on it in the future (a short certbot pointer follows the reload command below). Once your configuration for nginx is done, you can test the nginx configuration using
$ sudo nginx -t 
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful
and then reload nginx with
$ sudo nginx -s reload
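In case you still need to obtain the certificates referenced in the configuration above, one common approach (only a rough pointer, since certificate setup is out of scope here) is certbot in standalone mode, run while nothing else is listening on port 80:
$ sudo certbot certonly --standalone -d cloud.example.com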

Step 3: Setup of Nextcloud AIO from the browser. To set up Nextcloud AIO, we need to access it using the web browser at the URL of our domain.tld:8080; however, we do not want to open the 8080 port publicly to do this, so to complete the setup, here is a neat hack from sahilister:
ssh -L 8080:127.0.0.1:8080 username@<server-ip>
You can bind the 8080 port of your server to port 8080 on your localhost using local port forwarding over SSH. The port forwarding only lasts for the duration of your SSH session; if the SSH session breaks, your port forwarding will too. So, once you have the port forwarded, you can open the Nextcloud AIO instance in your web browser at 127.0.0.1:8080
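If you prefer not to keep an interactive shell open just for this, the same forwarding can also be run in the background using ssh's standard -N (no remote command) and -f (go to background) options:
ssh -N -f -L 8080:127.0.0.1:8080 username@<server-ip>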
You will get a certificate warning here, because you are trying to access a page on localhost over HTTPS with the interface's self-signed certificate. You can click on advanced and then continue to proceed to the next page. Your data is encrypted over SSH for this session as we are binding the port over SSH. Depending on your choice of browser, the warning page might look different. Once you have proceeded, the Nextcloud AIO interface will open.
nextcloud AIO initial screen with capsicums as password
It will show an auto-generated passphrase; you need to save this passphrase and make sure not to lose it. For the purposes of security, I have masked the passwords with capsicums. Once you have noted down your password, you can proceed to the Nextcloud AIO login, enter your password and then log in.
After login, you can put the domain that you want to use in the Submit domain field. Once the domain check is done, you will proceed to the next step.
Here you can select any optional containers for the features that you might want. IMPORTANT: Please make sure to also change the time zone at the bottom of the page according to the time zone you wish to operate in.
The timezone setup is also important because the database will get initialized according to the set time zone. A wrong time zone could result in a wrongly initialized database and you ending up in a startup loop for Nextcloud. I faced this issue and could only resolve it after getting help from sahilister. Once you are done changing the timezone and selecting any additional features you want, you can click on Download and start the containers. It will take some time for this process to finish; take a break, look at the farthest object in your room and take a sip of water. Once the process has finished, you will see the status of the containers starting up.
Wait patiently for everything to turn green.
Once all the containers have started properly, you can open the Nextcloud login interface on your configured domain. The initial login details are auto-generated; again you will see a password that you need to note down or save to enter the Nextcloud interface. I have masked the auto-generated passwords using capsicums, and capsicums will not work as passwords. Now you can click on the Open your Nextcloud button or go to your configured domain to access the login screen.
You can use the login details from the previous step to log in to the administrator account of your Nextcloud instance. There you have it, your very own cloud!

Additional Notes:

How to properly reset the Nextcloud setup? While following the above steps, or while following steps from some other tutorial, you may have made a mistake and want to start everything again from scratch. The instructions for this are present in the Nextcloud documentation here. Here is the TL;DR for a docker-compose setup. These steps will delete all data; do not use them on an existing Nextcloud setup unless you know what you are doing.
  • Stop your master container.
docker-compose -f compose.yml down -v
The above command will also remove the volume associated with the master container.
  • Stop all the child containers that have been started by the master container.
docker stop nextcloud-aio-apache nextcloud-aio-notify-push nextcloud-aio-nextcloud nextcloud-aio-imaginary nextcloud-aio-fulltextsearch nextcloud-aio-redis nextcloud-aio-database nextcloud-aio-talk nextcloud-aio-collabora
  • Remove all the child containers that have been started by the master container.
docker rm nextcloud-aio-apache nextcloud-aio-notify-push nextcloud-aio-nextcloud nextcloud-aio-imaginary nextcloud-aio-fulltextsearch nextcloud-aio-redis nextcloud-aio-database nextcloud-aio-talk nextcloud-aio-collabora
  • If you also wish to remove all images associated with nextcloud you can do it with
docker rmi $(docker images --filter "reference=nextcloud/*" -q)
  • Remove all volumes associated with the child containers (see the note after this list for how to find their names).
docker volume rm <volume-name>
  • Remove the network associated with Nextcloud.
docker network rm nextcloud-aio
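If you are unsure which volume names to use in the volume-removal step above, you can list the Nextcloud-related volumes first:
docker volume ls --filter name=nextcloud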

Additional references.
  1. Nextcloud Github
  2. Nextcloud reverse proxy documentation
  3. Nextcloud Administration Guide
  4. Nextcloud User Manual
  5. Nextcloud Developer's manual

Freexian Collaborators: Monthly report about Debian Long Term Support, November 2023 (by Roberto C. Sánchez)

Like each month, have a look at the work funded by Freexian's Debian LTS offering. Some notable fixes which were made in LTS during the month of November include the gnutls28 cryptographic library and the freerdp2 Remote Desktop Protocol client/server implementation. The gnutls28 update was prepared by LTS contributor Markus Koschany and dealt with a timing attack which could be used to compromise a cryptographic system, while the freerdp2 update was prepared by LTS contributor Tobias Frost and is the result of work spanning 3 months to deal with dozens of vulnerabilities. In addition to the many ordinary LTS tasks which were completed (CVE triage, patch backports, package updates, etc), there were several contributions by LTS contributors for the benefit of Debian stable and old-stable releases, as well as for the benefit of upstream projects. LTS contributor Abhijith PA uploaded an update of the puma package to unstable in order to fix a vulnerability in that package, while LTS contributor Thorsten Alteholz sponsored an upload to unstable of libde265 and himself made corresponding uploads of libde265 to Debian stable and old-stable. LTS contributor Bastien Roucariès developed patches for vulnerabilities in zbar and audiofile which were then provided to the respective upstream projects. Updates to packages in Debian stable were made by Markus Koschany to deal with security vulnerabilities and by Chris Lamb to deal with some non-security bugs. As always, the LTS team strives to provide high quality updates to packages under its direct purview while also rendering assistance to maintainers, the stable security team, and upstream developers whenever practical.

Debian LTS contributors In November, 18 contributors were paid to work on Debian LTS; their reports are available:
  • Abhijith PA did 7.0h (out of 0h assigned and 14.0h from previous period), thus carrying over 7.0h to the next month.
  • Adrian Bunk did 15.0h (out of 14.0h assigned and 9.75h from previous period), thus carrying over 8.75h to the next month.
  • Anton Gladky did 10.0h (out of 9.5h assigned and 5.5h from previous period), thus carrying over 5.0h to the next month.
  • Bastien Roucariès did 16.0h (out of 18.25h assigned and 1.75h from previous period), thus carrying over 4.0h to the next month.
  • Ben Hutchings did 12.0h (out of 16.5h assigned and 12.25h from previous period), thus carrying over 16.75h to the next month.
  • Chris Lamb did 18.0h (out of 17.25h assigned and 0.75h from previous period).
  • Emilio Pozuelo Monfort did 15.5h (out of 23.5h assigned and 0.25h from previous period), thus carrying over 8.25h to the next month.
  • Guilhem Moulin did 13.0h (out of 12.0h assigned and 8.0h from previous period), thus carrying over 7.0h to the next month.
  • Lee Garrett did 14.5h (out of 16.75h assigned and 7.0h from previous period), thus carrying over 9.25h to the next month.
  • Markus Koschany did 30.0h (out of 30.0h assigned).
  • Ola Lundqvist did 6.5h (out of 8.25h assigned and 15.5h from previous period), thus carrying over 17.25h to the next month.
  • Roberto C. Sánchez did 5.5h (out of 12.0h assigned), thus carrying over 6.5h to the next month.
  • Santiago Ruano Rincón did 3.25h (out of 13.62h assigned and 2.375h from previous period), thus carrying over 12.745h to the next month.
  • Sean Whitton did 3.25h (out of 10.0h assigned), thus carrying over 6.75h to the next month.
  • Sylvain Beucler did 10.0h (out of 13.5h assigned and 10.25h from previous period), thus carrying over 13.75h to the next month.
  • Thorsten Alteholz did 14.0h (out of 14.0h assigned).
  • Tobias Frost did 12.0h (out of 12.0h assigned).
  • Utkarsh Gupta did 0.0h (out of 6.0h assigned and 17.75h from previous period), thus carrying over 23.75h to the next month.

Evolution of the situation In November, we released 35 DLAs.

Thanks to our sponsors Sponsors that joined recently are in bold.

5 December 2023

Louis-Philippe Véronneau: Montreal's Debian & Stuff - November 2023

Hello from a snowy Montréal! My life has been pretty busy lately1 so please forgive this late report. On November 19th, our local Debian User Group met at Montreal's most prominent hackerspace, Foulab. We've been there a few times already, but since our last visit, Foulab has had some membership/financial troubles. Happy to say things are going well again and a new team has taken over the space. This meetup wasn't the most productive day for me (something about being exhausted apparently makes it hard to concentrate), but other people did a bunch of interesting stuff :) Pictures Here are a bunch of pictures I took! Foulab is always a great place to snap quirky things :) A sign on a whiteboard that says 'Bienvenue aux laboratoires qui rends fou' (Welcome to the laboratories that drive you mad). The entrance of the bio-hacking house, with a list of rules. An exploded keyboard with a 'Press F1 to continue' sign. An inflatable Tux with a Foulab T-Shirt. A picture of the woodworking workshop.

  1. More busy than the typical end of semester rush... At work, we are currently renegotiating our collective bargaining agreement and things aren't going so well. We went on strike for a few days already and we're planning on another 7 days starting on Friday 8th.

28 November 2023

Dirk Eddelbuettel: RcppSimdJson 0.1.11 on CRAN: Maintenance

A new maintenance release 0.1.11 of the RcppSimdJson package is now on CRAN. RcppSimdJson wraps the fantastic and genuinely impressive simdjson library by Daniel Lemire and collaborators. Via very clever algorithmic engineering to obtain largely branch-free code, coupled with modern C++ and newer compiler instructions, it results in parsing gigabytes of JSON per second, which is quite mind-boggling. The best-case performance is faster than CPU speed as use of parallel SIMD instructions and careful branch avoidance can lead to less than one cpu cycle per byte parsed; see the video of the talk by Daniel Lemire at QCon. This release responds to a CRAN request to address issues now identified by -Wformat -Wformat-security. These are frequently pretty simple changes, as was the case here: all it took was a call to compileAttributes() from an updated Rcpp version which now injects "%s" as a format string when calling Rf_error(). The (very short) NEWS entry for this release follows.

Changes in version 0.1.11 (2023-11-28)
  • RcppExports.cpp has been regenerated under an updated Rcpp to address a print format warning (Dirk in #88).

Courtesy of my CRANberries, there is also a diffstat report for this release. For questions, suggestions, or issues please use the issue tracker at the GitHub repo. If you like this or other open-source work I do, you can now sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Enrico Zini: Introducing Debusine

Abstract Debusine manages scheduling and distribution of Debian-related tasks (package build, lintian analysis, autopkgtest runs, etc.) to distributed worker machines. It is being developed by Freexian with the intention of giving people access to a range of pre-configured tools and workflows running on remote hardware. Freexian obtained STF funding for a substantial set of Debusine milestones, so development is happening on a clear schedule. We can present where we are, where we're going to be, and what we hope to bring to Debian with this work.

26 November 2023

Ian Jackson: Hacking my filter coffee machine

I hacked my coffee machine to let me turn it on from upstairs in bed :-). Read on for explanation, circuit diagrams, 3D models, firmware source code, and pictures. Background: the Morphy Richards filter coffee machine I have a Morphy Richards filter coffee machine. It makes very good coffee. But the display and firmware are quite annoying: Also, I'm lazy and wanted to be able to cause coffee to exist from upstairs in bed, without having to make a special trip down just to turn the machine on. Planning My original feeling was "I can't be bothered dealing with the coffee machine innards", so I thought I would make a mechanical contraption to physically press the coffee machine's on button. I could have my contraption press the button to turn the machine on (timed, or triggered remotely), and then periodically in pairs to reset the 25-minute keep-warm timer. But a friend pointed me at a blog post by Andy Bradford, where Andy recounts modifying his coffee machine, adding an ESP8266 and connecting it to his MQTT-based Home Assistant setup. I looked at the pictures and they looked very similar to my machine. I decided to take a look inside. Inside the Morphy Richards filter coffee machine My coffee machine seemed to be very similar to Andy's. His disassembly report was very helpful. Inside I found the high-voltage parts with the heating elements, and the front panel with the display and buttons. I spent a while poking about, measuring things, and so on. Unexpected electrical hazard At one point I wanted to use my storage oscilloscope to capture the duration and amplitude of the beep signal. I needed to connect the scope ground to the UI board's ground plane, but then when I switched the coffee machine on at the wall socket, it tripped the house's RCD. It turns out that the low voltage UI board is coupled to the mains. In my setting, there's an offset of about 8V between the UI board ground plane and true earth. (In my house the neutral is about 2-3V away from true earth.) This alarmed me rather. To me, this means that my modifications needed to still properly electrically isolate everything connected to the UI board from anything external to the coffee machine's housing. In Andy's design, I think the internal UI board ground plane is directly brought out to an external USB-A connector. This means that if there were a neutral fault, the USB-A connector would be at live potential, possibly creating an electrocution or fire hazard. I made a comment on Andy Bradford's blog, reporting this issue, but it doesn't seem to have appeared. This is all quite alarming. I hope Andy is OK! Design approach I don't have an MQTT setup at home, or an installation of Home Assistant. I didn't feel like adding a lot of complicated software to my life, if I could avoid it. Nor did I feel like writing a web UI myself. I've done that before, but I'm lazy and in this case my requirements were quite modest. Also, the need for electrical isolation would further complicate any attempt to do something sophisticated (that could, for example, sense the state of the coffee machine). I already had a Tasmota-based cloud-free smart plug, which controls the fairy lights on our gazebo. We just operate that through its web UI. So, I decided I would add a small and stupid microcontroller. The microcontroller would be powered via a smart plug and an off-the-shelf USB power supply. The microcontroller would have no inputs. It would simply simulate an on button press once at startup, and thereafter two presses every 24 minutes. 
Implementation - hardware

I used a DigiSpark board with an ATTiny85. One of the GPIOs is connected to an optoisolator, whose output transistor is wired across the UI board's on button.

circuit diagram; board layout diagram (click through for diagram scans as PDFs).

The DigiSpark has just a USB tongue, which is very wobbly in a normal USB socket, so I designed a 3D-printed case which also has an approximation of the rest of the USB-A plug. The plug is out of spec; our printer won't go fine enough, and anyway the shield is supposed to be metal, not fragile plastic. But it fits into the USB PSU I was using, satisfactorily if a bit stiffly, and also into the connector for programming via my laptop.

Inside the coffee machine there is the boundary between the original, coupled-to-mains UI board and the isolated low voltage of the microcontroller. I used a reasonably substantial cable to bring out the low-voltage connection, past all the other hazardous innards, to make sure it stays isolated.

I added a power-supply drain resistor on another of the GPIOs. This is enabled, with a draw of about 30 mA, when the microcontroller is soon going to off/on cycle the coffee machine. That reduces the risk that the user turns off the smart plug, and with it the machine, only for the microcontroller to turn the coffee machine back on again using the residual power in the USB PSU. Empirically, in my setup it reduces the time from smart plug off to microcontroller stopping from about 2-3 s to more like 1 s.
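As a rough sanity check on that drain (my arithmetic, not from the post, assuming the resistor is switched across a nominal 5 V USB rail): a 30 mA draw corresponds to about 5 V / 0.03 A ≈ 170 Ω, dissipating roughly 5 V × 0.03 A ≈ 0.15 W, so a common 150-180 Ω quarter-watt part would be consistent with the figure given.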
Optoisolator board (inside coffee machine) pictures

(Click through for full-size images.) optoisolator board, front; optoisolator board, rear; optoisolator board, fitted.

Microcontroller board (in USB-plug-ish housing) pictures

microcontroller board, component side; microcontroller board, wiring side, part fitted; microcontroller in USB-plug-ish housing.

Implementation - software

I originally used the Arduino IDE, writing my program in C. I had a bad time with that and rewrote it in Rust. The firmware is in a repository on Debian's gitlab.

Results

I can now cause the coffee to start from my phone. It can be programmed more than 12h in advance. And it stays warm until we've drunk it.

UI is worse

There is one aspect of the original Morphy Richards machine that I haven't improved: the user interface is still poor. Indeed, it's now even worse. To turn the machine on, you probably want to turn on the smart plug instead; unhappily, the power button for that is invisible in its installed location. In particular, in the usual case, if you want to turn it off, you should ideally turn off both the smart plug (which can be done with the button on it) and the coffee machine itself. If you forget to turn off the smart plug, the machine can end up being turned on, very briefly, a handful of times, over the next hour or two.

Epilogue

We had used the new features a handful of times when one morning the coffee machine just wouldn't make coffee. The UI showed it turning on, but it wouldn't get hot, so no coffee. I thought "oh no, I've broken it!". But, on investigation, I found that the machine's heating element was open circuit (i.e. completely broken). I hadn't messed with that part, so, hooray, not my fault. Probably just being inverted a number of times and generally lightly jostled had precipitated a latent fault; the machine was a number of years old. Happily I found a replacement, identical, machine online. I've transplanted my modification and now it all works well.

Bonus pictures

(Click through for full-size images.) probing the innards; machine base showing new cable route.
edited 2023-11-26 14:59 UTC in an attempt to fix TOC links



22 November 2023

Valhalla's Things: PDF planners 2024

Posted on November 22, 2023
A few years ago I wrote a bit of code to generate a custom printable planner, precisely to my taste. Then I showed the result to other people and added a few variants for their tastes. I've just generated the first 2024 file (yes, this year I'm late with the printing and binding), and realized that it may be worth posting all the variants on this blog, in case somebody else is interested in using them.

The files with -book in the name have been imposed on A4 paper for a 16-page signature (a sketch of that page ordering is at the end of this post). All of the fonts have been converted to paths, for ease of printing (yes, this means that customizing the font requires running the script, sorry).

A few planners in English:

The same planners, in Italian:

And finally, a monthly planner with ephemerides for the town of Como (I mean, everybody everywhere needs one of those, right?); here the -book files are imposed for a 3-sheet (12-page) signature.

I hereby release all the PDFs linked in this blog post under the CC0 license. I've just realized that the git repository linked above does not have licensing information, but I'm not sure what the right thing to do is, since it's mostly a dump of unsupported works-for-me code. If you need it for something (compatible with its unsupported status) other than running it for personal use (for which, afaik, there is an implicit license), let me know and I'll push "decide on a license" higher on the stack of things to do :D
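For the curious, here is a toy sketch of the page ordering that this kind of booklet imposition produces, two pages per side of each sheet. It is not Valhalla's actual script, just an illustration, and the function name is made up.

    // Pairs of logical page numbers per sheet side for a saddle-stitched
    // signature: outer side of the outermost sheet carries (last, first),
    // its back carries (second, second-to-last), and so on inwards.
    fn signature_sides(pages: u32) -> Vec<(u32, u32)> {
        assert!(pages % 4 == 0, "a folded signature needs a multiple of 4 pages");
        let mut sides = Vec::new();
        let (mut lo, mut hi) = (1u32, pages);
        while lo < hi {
            sides.push((hi, lo));
            sides.push((lo + 1, hi - 1));
            lo += 2;
            hi -= 2;
        }
        sides
    }

    fn main() {
        // 16-page signature: [(16,1), (2,15), (14,3), (4,13), ...]
        println!("{:?}", signature_sides(16));
        // 3-sheet (12-page) signature, as used for the Como planner.
        println!("{:?}", signature_sides(12));
    }

Printed two-up and duplexed in that order, the sheets fold and nest into a correctly ordered booklet.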
